Total Innovation Podcast
Welcome to "Total Innovation," the podcast where I explore all the different aspects of innovation, transformation and change. From the disruptive minds of startup founders to the strategic meeting rooms of global giants, I bring you the stories of change-makers. The podcast will engage with different voices, and peer into the multi-faceted world of innovation across and within large organisations.
I speak to those on the ground floor, the strategists, the analysts, and the unsung heroes who make innovation tick. From technology breakthroughs to cultural shifts within companies, I'm on a quest to understand how innovation breathes new life into business.
I embrace the diversity of thoughts, backgrounds, and experiences that inform and drive corporate renewal and evolution, from both sides of the microphone. The Total Innovation journey will take you through the challenges, the victories, and the lessons learned in the ever-evolving landscape of innovation.
Join me as we explore the narratives of those shaping the market, those writing about it, and those doing the hard work. This is "Total Innovation," where every voice counts and every story matters.
Brought to you by The Infinite Loop – Where Ideas Evolve, Knowledge Flows, and Innovation Never Stops.
Powered by Wazoku, helping to Change the World, One Idea at a Time.
Total Innovation Podcast
35. Expected Value - Chapter 11
In this episode we step into chapter 11, learning loops and dynamic resourcing, where Freya discovers the fundamental truth at the heart of innovation performance. Namely, the value of an innovation is directly proportional to how much we learn from it, irrespective of whether it succeeds or fails.
This chapter begins with tension. A major project is under scrutiny, the CFO demands evidence, and Freya realizes the team has been learning, but not showing the learning. What follows is a profound shift from reporting activity to reporting confidence, from fixed plans to dynamic resource allocation, and from rigid portfolios to living systems that adapt at the pace of evidence.
We explore how Freya builds learning velocity, tracks XV deltas, reframes kill decisions, and introduces dynamic resourcing that moves money, talent, and attention to where learning is happening fastest.
It's the moment where innovation stops being a bet and becomes a learning engine, and it sets the foundation for accelerating the journey from expected to realized value.
SPEAKER_00:Welcome back to the Total Innovation Podcast and to another exclusive chapter from the book, Expected Value, written and read by me, Simon Hill. Today, we step into chapter 11, learning loops and dynamic resourcing, where Freya discovers the fundamental truth at the heart of innovation performance. Namely, the value of an innovation is directly proportional to how much we learn from it, irrespective of whether it succeeds or fails. This chapter begins with tension. A major project is under scrutiny, the CFO demands evidence, and Freya realizes the team has been learning, but not showing the learning. What follows is a profound shift from reporting activity to reporting confidence, from fixed plans to dynamic resource allocation, and from rigid portfolios to living systems that adapt at the pace of evidence. We explore how Freya builds learning velocity, tracks XV deltas, reframes kill decisions, and introduces dynamic resourcing that moves money, talent, and attention to where learning is happening fastest. It's the moment where innovation stops being a bet and becomes a learning engine, and it sets the foundation for accelerating the journey from expected to realized value.

The value of an innovation is directly proportional to how much we learn from it, whether it succeeds or fails. Freya leaned against the wall of the conference room, watching the executive team debate the fate of Project Phoenix. We've spent nearly two hundred fifty thousand dollars on this. And all I'm hearing is that we're still not sure if it will work, said David, the CFO. How much more do we need to spend before we have an answer? Axel tried to explain. Innovation isn't linear. Sometimes you have to explore before you can answer the question. I understand exploration, David interrupted, but at some point we need to make decisions based on evidence, not hope. What have we actually learned from all this investment? The room fell silent. It was a fair question, and one they struggled to answer convincingly. After the meeting, Freya pulled Axel aside. David is right. We're treating learning as a side effect when it should be our primary product at this stage. But how do we measure learning? Axel asked. It's not exactly something you can put on a spreadsheet. Freya smiled. Actually, I think it is. I've been thinking about what David said, she told the team. The problem isn't that we're not learning, it's that we're not tracking what we learn or making it visible. She pinned her diagram to the wall. This is what we should be reporting, she explained. Not just money spent but confidence gained. It changes the conversation from how much have we spent to what have we learned? Axel studied the diagram. I like the transparency, but that cost per confidence point is alarming. Exactly, said Freya. It forces us to ask if we're learning in the most efficient way possible.
SPEAKER_01:From opinions to evidence. The team decided to implement the confidence tracking system across their entire portfolio. For each project, they would: 1. Break down confidence into key components: market, technical, business model, etc. 2. Rate each component on a scale from 0.1 to 1.0. 3. Track changes in confidence after each learning cycle. 4. Calculate the cost per confidence point gained.
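As a minimal sketch of how this tracking might work in practice, the snippet below averages component ratings into an overall confidence score and derives the cost per confidence point. The component names, the equal-weight average, and all figures are illustrative assumptions, not the book's specification.

```python
# Hypothetical sketch of the confidence tracking system described above.
# Component names, equal weighting, and all figures are assumptions.

def overall_confidence(components: dict) -> float:
    """Average the component ratings (each rated 0.1 to 1.0)."""
    return sum(components.values()) / len(components)

def cost_per_confidence_point(spend: float, gain: float) -> float:
    """Cost of each confidence point gained in a learning cycle."""
    if gain <= 0:
        raise ValueError("No confidence gained; cost per point is undefined.")
    return spend / gain

# One learning cycle for an illustrative project.
before = {"market": 0.3, "technical": 0.5, "business_model": 0.2}
after = {"market": 0.5, "technical": 0.6, "business_model": 0.3}

gain = overall_confidence(after) - overall_confidence(before)
print(f"Confidence gain: {gain:.2f}")  # roughly 0.13
print(f"Cost per point: ${cost_per_confidence_point(40_000, gain):,.0f}")
```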
SPEAKER_00:Axel created a digital dashboard that showed confidence trends across projects. The visualization made patterns immediately apparent. Look at these two projects, he said, pointing to the screen. Quantum has gained 0.15 confidence points in three months, while Nova has gained 0.25 in just one month. What's Nova doing differently? asked Freya. They're using rapid, low cost experiments instead of building comprehensive prototypes, Axel explained. Their learning cycles are weekly instead of monthly. Freya nodded. They're getting faster feedback at lower cost. We need to apply that approach more broadly.
SPEAKER_04:The learning velocity dashboard.
SPEAKER_00:Inspired by the insights from confidence tracking, the team developed a more comprehensive learning system. Each project now had a learning velocity metric, measured as confidence points gained per month. The velocity metric is transformative, the COO observed during the next portfolio review. It shifts our focus from timeline adherence to learning rate. And it helps us make better resource allocation decisions, added Freya. We should invest more in projects that are learning quickly and reconsider our approach for those that aren't.

The rhythm of innovation is nonlinear. One of the most frustrating truths about innovation is that progress isn't predictable. Some ideas stall, then spike. Others show early traction, then taper off. A few surprise everyone. What matters isn't just where an idea is now, but how it's moving. Freya made a simple request to Axel. Track the XV delta for every idea every month. Not just the score, the change. Is confidence growing or declining? Is value expanding or compressing? Has time sensitivity shifted? she asked during their planning session. Axel nodded thoughtfully. We're looking for velocity, not just position. Exactly, Freya replied. I want to see which ideas are gaining momentum and which are stalling. I want to know where our learning is accelerating and where it's blocked. This small shift changed everything. Ideas were no longer frozen snapshots, they became dynamic storylines, and the deltas became signals, indicators of learning.
SPEAKER_04:The learning signal system.
SPEAKER_00:Within weeks, Axel had built what they called the learning signal dashboard. It wasn't complex, a simple visualization showing each innovation initiative with directional indicators. Green arrows pointing upward showed initiatives where confidence was building based on positive evidence. Yellow horizontal lines indicated stable but unmoving confidence. Red arrows pointing downward revealed declining confidence as assumptions were invalidated or challenges emerged. What made this dashboard powerful wasn't just its simplicity, it was how it shifted attention from status to trajectory. Team conversations transformed from where are we? to what are we learning, and how fast? In their first review using the new dashboard, a pattern emerged immediately. Several high profile initiatives showed flat or declining confidence despite considerable time and resource investment. Meanwhile, a few smaller projects were showing rapid confidence acceleration with minimal resources. We're over investing in ideas that aren't generating learning and underinvesting in ones that are, Freya observed. This insight revealed a fundamental misalignment in how resources were allocated across the portfolio. The organization was following the traditional budget cycle, assigning resources annually based on forecasted potential rather than actual learning momentum. We need a different approach, Freya said, one that matches the nonlinear nature of innovation itself.

Learning loop example: evolving time sensitivity. One of the most powerful aspects of the XV system is how it captures changing market conditions through updates to all factors, including time sensitivity. Consider this evolution of a digital identity solution over three quarters.

Q1 assessment: confidence 0.4 (early validation), predicted value $2,000,000, time sensitivity 0.8 (strategic delay: market and regulatory framework still developing), strategic fit 1.1 (strong strategic alignment with the digital transformation agenda). XV = $704,000.

Q2 assessment: confidence 0.5 (increasing through testing), predicted value $2,300,000 (refined based on market research), time sensitivity 1.0 (neutral: regulatory clarity emerging, window approaching), strategic fit 1.1 (maintained strong strategic alignment). XV = $1,265,000.

Q3 assessment: confidence 0.6 (strong pilot results), predicted value $2,500,000 (expanded use cases identified), time sensitivity 1.4 (high urgency: a competitor announced a similar solution, window narrowing rapidly), strategic fit 1.2 (increased: the solution shows alignment with additional strategic priorities). XV = $2,520,000.

This trajectory illustrates why dynamic assessment is so critical, Freya explained to the governance board. The initiative's XV more than tripled over six months, not just because our confidence grew or value estimates improved, but because the optimal timing window shifted dramatically and strategic fit strengthened. What began as a wait and prepare opportunity transformed into an urgent priority due to changing market conditions and increased strategic relevance. Without the time sensitivity and fit components, Axel added, we might have missed these shifts entirely or relied on subjective urgency claims rather than structured assessment. This example became a cornerstone of their learning loop process, demonstrating how all components of XV, not just confidence, needed regular reassessment based on emerging evidence.
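The quarterly figures above follow from a simple product. A minimal sketch, assuming XV is the product of confidence, predicted value, time sensitivity, and strategic fit, which is the formula the chapter's numbers imply:

```python
# XV = confidence x predicted value x time sensitivity x strategic fit.
# The formula is inferred from the chapter's quarterly figures.

def expected_value(confidence, predicted_value, time_sensitivity, strategic_fit):
    return confidence * predicted_value * time_sensitivity * strategic_fit

quarters = {
    "Q1": (0.4, 2_000_000, 0.8, 1.1),  # -> $704,000
    "Q2": (0.5, 2_300_000, 1.0, 1.1),  # -> $1,265,000
    "Q3": (0.6, 2_500_000, 1.4, 1.2),  # -> $2,520,000
}

for quarter, factors in quarters.items():
    print(f"{quarter}: XV = ${expected_value(*factors):,.0f}")
```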
Challenge driven learning loops. The learning signal approach naturally connected to the challenge driven mindset that had been influencing Freya's innovation system from the beginning. By framing innovation around specific business challenges rather than abstract ideas, the team could evaluate learning in a more focused way. When we clearly define the challenge we're trying to solve, learning becomes more directed and measurable, Freya explained to her team. We're not just learning for learning's sake, we're learning specifically what works and doesn't work in addressing our most important problems. This challenge driven context created what Freya called purposeful learning loops, structured cycles of hypothesis formation, testing and evidence evaluation explicitly linked to challenge resolution. For each major challenge in their portfolio, the team maintained a challenge learning canvas that tracked key unknowns preventing challenge resolution, experiments designed to address those unknowns, evidence gathered and its implications, emerging insights about the challenge itself, and learning priorities to advance resolution. The challenge frame gives learning direction, Axel observed. It helps us evaluate not just what we're learning but whether that learning is actually bringing us closer to solving the problem that matters. This approach elevated learning from a generic activity to a strategic capability directly connected to business outcomes. As teams got better at articulating challenges precisely, their learning became more efficient, focusing resources on resolving the critical unknowns rather than exploring tangential questions. The challenge focus also created natural links between learning and value. When learning was explicitly tied to challenge resolution, it became easier to trace how that learning contributed to eventual value realization. Each insight generated wasn't just interesting, it was a step toward solving a problem the business had already determined was worth solving. Challenge driven learning closes the gap between exploration and impact, Freya told the portfolio committee. It ensures that even early stage learning has a clear path to value realization.

Evidence over ego. Freya quickly noticed something unexpected. The more the team focused on movement over milestones, the less emotionally attached they became to individual ideas. Kill decisions weren't viewed as losses; they were reframed as closing loops. One idea, a low code data integration pilot, had shown early promise. It scored well on XV and on strategic fit. It cleared governance, but after six weeks of testing XV dropped by forty percent: confidence declined and the perceived value narrowed as complexity surfaced. In the past this might have triggered a justification loop: can we salvage it? Now it triggered a review. What did we learn? Where else could it apply? What capacity does this release? The team logged it, tagged the insights and moved on. And then, almost by accident, that learning surfaced two weeks later in a completely different initiative. Innovation wasn't just accelerating, it was compounding.

The risk capital approach. As the learning loops matured, Freya recognized they needed a formal way to account for the value of knowledge generated through exploration, particularly for initiatives that didn't reach implementation. Working with David, she developed what they called the innovation risk capital approach.
Traditional accounting treats innovation investment as an expense, Freya explained to the finance team, but when we make deliberate learning investments that generate valuable knowledge, that's not just an expense, it's the creation of an intellectual asset. The risk capital model introduced a new balance sheet item, the innovation risk capital reserve, representing the total pre-approved learning investment across the portfolio. This reserve was allocated to specific initiatives using a formula that incorporated confidence, time sensitivity and fit: risk capital quotient (RCQ) = confidence × time sensitivity × strategic fit. In practical terms this meant early stage initiatives, those with low confidence but promising strategic fit, received calculated allocations from the risk capital reserve, while initiatives showing strong evidence progression would justify increased investment. The model created a systematic, auditable framework for what many organizations traditionally handled through gut feel or politics. We're not changing financial standards, David explained to his finance colleagues, we're applying established intangible asset accounting principles to innovation learning in a structured, auditable way. The approach transformed how the organization viewed innovation investment. Failed experiments that generated valuable insights were no longer hidden as embarrassing expenses but recognized as contributions to the organization's knowledge assets. This created cultural permission for intelligent failure while maintaining financial discipline. The risk capital reserve gives us a balance sheet language for innovation that CFOs understand, Freya told her team. It converts learning from an abstract value to a concrete asset.

AI powered learning acceleration. As the learning loops approach matured, Freya recognized an opportunity to enhance the system through artificial intelligence. Working with the technology team, she developed a set of AI tools designed to accelerate and deepen learning across the innovation portfolio. AI gives us the ability to identify patterns in our learning that might take humans months or years to recognize, she explained to her team. It doesn't replace human judgment but it dramatically expands our capacity to learn effectively. The AI enhancement took several forms. 1. The learning pattern recognition engine analyzed data across experiments to identify common success and failure patterns. It could detect, for instance, that certain types of user interfaces consistently increased adoption rates across different solutions, or that particular technical approaches repeatedly encountered integration problems. 2. Cross domain insight generation used natural language processing to connect learning from seemingly unrelated projects. It might identify that a customer onboarding challenge in one business unit had significant parallels to an employee training issue in another, suggesting solution approaches that could be transferred. 3. Experiment design optimization leveraged historical experiment data to suggest more effective test designs.
By analyzing what types of experiments had generated the most decisive learning in similar contexts, it helped teams design higher yield experiments from the start. 4. Learning velocity prediction analyzed the characteristics of initiatives showing rapid confidence movement versus those stalling, helping the team identify early signals of high learning potential. These AI capabilities didn't automate innovation decisions, but they dramatically increased the signal to noise ratio in the learning process. Teams could more quickly identify which experiments were most informative, which patterns were meaningful and which insights might apply across domains. The AI doesn't tell us what to do, Axel explained. It helps us see connections and patterns that make our human decision making more effective. This approach transformed how the organization thought about the role of AI in innovation, not as a replacement for human creativity but as an amplifier of human learning capacity. The combination of human insights and machine pattern recognition created a learning system far more powerful than either could achieve alone. The team also implemented clear ethical guidelines for their AI systems, including regular bias checks, transparent reasoning explanations and human oversight of all significant decisions. This ensured that their AI tools wouldn't perpetuate existing biases or create new blind spots in the learning process.

The Learning Canvas. To formalize this approach to learning, Freya and Axel developed what they called the Learning Canvas. For each active initiative, the team maintained a living document structured around four key questions. 1. What do we need to learn to increase confidence? 2. What experiments will generate this learning? 3. What evidence would change our confidence significantly? 4. How will we capture and share what we learn? The learning canvas shows how teams structured their learning plans, with sections for assumptions to test, experiment designs, evidence thresholds and knowledge sharing approaches. This canvas became the centerpiece of regular review sessions. Rather than status updates focused on activities completed, teams discussed learning progress: what they now knew that they didn't know before, what assumptions had been validated or invalidated, and what new questions had emerged. The canvas wasn't just a documentation tool, it fundamentally changed how teams approached innovation work. They became more deliberate about designing experiments that would yield meaningful learning. They became more honest about evidence that contradicted their initial hypotheses, and they became more efficient at extracting value from both successful and unsuccessful initiatives. We're not paid to be right, Freya reminded her team. We're paid to get more right over time.

Lead user learning: tapping the innovation underground. As Freya's team developed their learning loops, they discovered an often overlooked source of innovation intelligence: lead users within the organization who had already developed their own solutions to pressing challenges. Every organization has unofficial innovators, people who don't wait for formal processes but create workarounds and improvements to solve their daily problems, Freya explained to the portfolio committee. They're a gold mine of learning that we rarely tap systematically. Working with Axel, Freya developed a lead user learning network specifically designed to capture and leverage this grassroots innovation.
The approach included several interconnected elements.

AI enabled lead user discovery. They created an AI system that scanned internal platforms, support tickets and communication networks to identify patterns suggesting lead user innovation. The system looked for language indicating workarounds, unofficial tools or performance outliers that might reveal innovative approaches. The AI doesn't just find the solutions, it helps us understand the problems that drive people to create their own fixes, Axel explained. That often reveals gaps in our formal systems that we wouldn't otherwise see.

Rapid learning extraction. Rather than trying to formalize these innovations immediately, which often stripped away their contextual value, the team developed rapid learning extraction protocols. These lightweight interviews and observation sessions captured not just what the lead users had created, but why they'd created it, what they'd learned in the process, and how they measured success. The most valuable insights often aren't in the solution itself but in the journey the lead user took to create it, Freya noted. They've already run experiments we'd otherwise have to design from scratch.

Connected learning communities. The team established learning communities that connected lead users across different functions with similar challenges. These communities became powerful engines for accelerating learning, as solutions that had evolved independently could be compared, combined and enhanced. When we connect people who've been solving similar problems in isolation, learning doesn't just add up, it multiplies, Axel observed.

Pathways to scale. For the most promising lead user innovations, the team created structured pathways to formally integrate them into the innovation portfolio. This wasn't about taking over the innovations but about providing resources, technical support and organizational backing to help them scale more effectively. We're not trying to control these innovations, Freya explained to the governance board, we're trying to amplify them, to help good solutions reach more people without losing what makes them effective.

The lead user learning network dramatically accelerated learning velocity across the organization. Instead of starting from scratch on many challenges, the team could build on the experiments and insights that lead users had already generated. This not only saved time and resources but often led to more contextually appropriate solutions. Lead users have already done the hardest part of innovation, figuring out what actually works in the real world, Freya told her team. Our job is to learn from them, connect them and help scale what's working. This approach transformed how the organization thought about innovation sources, no longer seeing the formal innovation function as the primary generator of new ideas, but as the orchestrator of a distributed innovation ecosystem that existed throughout the organization.

The power of lightweight reviews. To support the learning loops, Freya embedded a regular rhythm of portfolio reflection. Monthly portfolio reviews led by innovation leads focused on XV deltas, fit shift alerts and time sensitivity updates. These weren't progress updates, they were pattern recognition. I noticed three of our AI initiatives are showing declining confidence, someone might observe. Are we seeing a common barrier? Our sustainability projects are all showing accelerating confidence, another might point out. What's driving that momentum and can we apply it elsewhere?
These pattern recognition discussions revealed insights that wouldn't have been visible when looking at initiatives in isolation. Quarterly resource reallocation sessions brought together cross functional stakeholders, including finance. They focused on three questions. 1. Where is momentum growing? 2. Where are we stuck? 3. Where should we redirect effort? They kept these reviews light, no bloated decks, no grand theater, just conversation around data. The learning velocity on the customer analytics project has stalled, Freya might note. Meanwhile the logistics optimization initiative is showing rapid confidence growth with minimal resources. Should we shift some capacity from the former to the latter? These weren't abstract discussions, they resulted in concrete resource decisions, moving people, budget and attention to where learning was happening fastest and value potential was being confirmed. Over time a new behavior emerged: teams started requesting reallocations themselves, not because they were failing but because they saw where value could move faster. That was the shift, from defending investments to optimizing momentum, from static budgets to dynamic health.

Freya knew the real challenge would come during the annual planning cycle. The business still operated on classic budget lines: projects funded in Q1, locked in by Q2, reviewed by year end. But innovation doesn't follow fiscal years. Fixed budgets create an unintended trap. Annual budgets optimize for spend, not value. They freeze choices before evidence arrives, encourage spend it or lose it behavior, and keep weak projects alive. Pre-committed spend means funding is locked early, teams learn late, and pivots become slow and expensive. Replace big commitments with short tranches. At each review, bring an evidence pack with updated XV, XV efficiency, and the fit radar. Scale only when confidence and fit rise together, otherwise pause or change the scope. Sunk cost drift results in work continuing because it already has funding, even as XV falls and strategic fit erodes. Set a clear floor for XV and an automatic review when the delta is material, for example twenty five percent. When a project drops below the floor, stop it. Recycle assets and talent, and reallocate to the highest XV per constraint. Set and forget portfolios are too common: plans age while markets move, signals are seen but not funded, and timing windows are missed. Run a monthly portfolio room that rebalances on XV changes and capacity, and a quarterly shift of ten to twenty percent of total spend. Use time sensitivity to direct pace where speed compounds value. A living portfolio does the opposite: it moves money and people as signals change, guided by XV, fit, time sensitivity and XV efficiency, the cost to deliver one dollar of realized value. A fixed budget is one big bet made too early. A living budget is a series of small, reversible bets that compound learning and value.

Freya pitched the change to a dynamic budget, at least partially. Working with David, the CFO, she carved out a portion of the innovation budget, just 15%, and made it fully dynamic. It could be reallocated quarterly based on XV growth, confidence maturity, fit reassessments, and XV efficiency scores. It was a small amount, but symbolically massive. The conversation with David wasn't easy. Like most CFOs, he valued predictability and control in financial planning.
The idea of a floating budget that could shift between initiatives throughout the year ran counter to established finance processes. I understand the need for flexibility, he said during their initial discussion, but we need guardrails. I can't have a free for all with company resources. Freya nodded. I'm not asking for a blank check, I'm asking for a responsive investment approach that allows us to double down on what's working and pull back from what isn't, just like any smart investor would. What ultimately convinced David wasn't theory but evidence. Freya showed him historical data on how confidence had evolved across previous initiatives, highlighting how the static funding model had forced continued investment in declining opportunities while leaving accelerating opportunities resource constrained. If we had been able to reallocate just 15% of last year's innovation budget based on learning signals, she demonstrated, we could have increased our innovation impact by nearly forty percent. The logic was compelling. David agreed to the dynamic allocation model with three conditions. One, clear governance over reallocation decisions. Two, transparent metrics showing the impact of reallocations. Three, quarterly reconciliation with finance systems. This wasn't a victory over finance, it was a partnership with finance, one that recognized both the need for accountability and the need for adaptability.

Efficiency weighted reallocation. The team's approach to dynamic resourcing evolved significantly when they began incorporating efficiency metrics into their decisions. Learning velocity is important, Axel explained during the third portfolio review, but learning efficiency might be even more critical. If initiative A is learning twenty percent faster than initiative B but costs five times more per learning cycle, where should we invest? Freya nodded thoughtfully. We need to optimize for knowledge gained per pound invested, not just speed of learning. They gathered around the whiteboard as Axel sketched out a new formula: learning efficiency = (confidence growth rate × value potential) ÷ investment rate. This changes how we allocate resources, David observed. Projects that are capital efficient in their learning should get more runway even if they're not the fastest movers in absolute terms. Freya pulled up the current allocation spreadsheet. Let's run the numbers with this new lens and see what happens. The results were striking.

Before efficiency weighting: high cost internal project (virtual reality interface), 40% of resources; medium cost hybrid project (customer analytics platform), 35% of resources; low cost open innovation (sustainability challenge), 25% of resources.

After efficiency weighting: high cost internal project, 15% of resources; medium cost hybrid project, 30% of resources; low cost open innovation project, 55% of resources. Result: a 3.2 times increase in learning per pound invested.

The virtual reality interface project is generating valuable insights, Axel admitted, but at $18,000 per confidence point gained, it's enormously expensive compared to the sustainability challenge at $1,200 per confidence point. This isn't about abandoning promising initiatives, Freya clarified, seeing the concern on some team members' faces. It's about finding more efficient ways to learn what we need to know.
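To make the mechanics concrete, here is a minimal sketch of efficiency weighted reallocation using the formula above. The project figures are hypothetical stand-ins, and allocating budget in direct proportion to the efficiency score is an illustrative assumption rather than the book's prescribed method.

```python
# Learning efficiency = (confidence growth rate x value potential) / investment rate.
# Project names echo the chapter; the numbers are invented for illustration.

projects = {
    # name: (confidence growth per month, value potential, monthly investment)
    "vr_interface": (0.03, 5_000_000, 90_000),
    "customer_analytics": (0.04, 3_000_000, 40_000),
    "sustainability_challenge": (0.05, 2_500_000, 12_000),
}

def learning_efficiency(growth: float, value: float, investment: float) -> float:
    return (growth * value) / investment

scores = {name: learning_efficiency(*p) for name, p in projects.items()}
total = sum(scores.values())

# Allocate the budget pool in proportion to each project's learning efficiency.
for name, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: efficiency {score:.1f}, share {score / total:.0%}")
```

As in the chapter's example, the low cost project dominates the reweighted allocation even though its absolute learning rate is not the highest.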
The team implemented a new approach to resource allocation decisions as follows. 1. Efficiency tiers: projects were categorized into efficiency tiers based on their cost per learning cycle. 2. Milestone based funding: high cost learning approaches received smaller tranches of funding with more frequent review gates. 3. Method diversification: any project with above average learning costs was required to test at least one alternative, lower cost learning approach. The COO, who had initially been skeptical, became a convert after seeing the results six weeks later. By shifting resources to more efficient learning mechanisms, we've actually accelerated our overall portfolio progress while reducing our burn rates by 42%. The quarterly executive update highlighted this transformation. David, the CFO, traditionally the most challenging stakeholder for innovation teams, actually smiled during the presentation. For the first time, he said, I feel like we're treating learning as a genuine investment with measurable returns, not just an excuse to spend money on experiments. Freya recognized this as a pivotal moment. By integrating efficiency into their learning system, they transformed the conversation from how much can we afford to learn to how can we learn the most with our available resources? The impact extended beyond financial metrics. Teams became more creative in designing low cost experiments, seeking external partners who could reduce learning costs and leveraging existing data before generating new research. We're not just learning more efficiently, Axel observed. We're asking better questions because we're more conscious of the cost of answering them. As they updated their learning loops framework, efficiency became the third critical dimension alongside velocity and quality. The monthly learning sessions now included an efficiency spotlight, where teams shared their most cost-effective learning tactics. This knowledge sharing further accelerated the transformation as successful approaches quickly spread across the portfolio. Three months after implementing efficiency weighted reallocation, the team had reduced total learning costs by 58%, increased the number of active experiments by 340%, and improved overall portfolio confidence by 23%. This isn't just about doing more with less, Freya explained to the leadership team. It's about removing the artificial constraints we put on learning when we use high-cost approaches by default. The team had discovered that efficient learning wasn't just financially advantageous, it was strategically superior. By prioritizing approaches that generated insights at lower cost, they could explore more options, test more hypotheses, and build a more diverse innovation portfolio without requiring additional resources. In innovation, Axel concluded, the organization that can learn the most efficiently has an almost insurmountable advantage. They can explore ten paths for the cost of their competitors' one.

Implementation guide: starting with dynamic resourcing. For organizations wishing to implement dynamic resourcing, this approach can start small and expand as success is demonstrated. 1. Start with five to ten percent of your innovation budget as a dynamic pool, especially in risk averse organizations. 2. Set clear parameters for when reallocation is permitted, for example confidence shifts of more than twenty percent. 3. Create transparent metrics that all stakeholders can see, showing both what's being reallocated and why.
4. Hold quarterly reviews with finance showing the impact of previous reallocations. 5. Document success stories where dynamic reallocation led to demonstrably better outcomes. This incremental approach builds trust and provides evidence for expanding the dynamic portion over time.

And then Freya went a step further. She made the dynamic budget talent agnostic. Rather than tying every pound to internal teams, Freya embedded a standing assumption that open innovation and open talent were not exceptions, but default options. If internal capacity was low or risk was high, her team could turn to one or more of the following: the global InnoCentive solver community to crowdsource technical or market solutions; freelance design, data science or build specialists from a vetted pool; startup partnerships for rapid prototyping; domain experts for short burst advisory sprints. This wasn't outsourcing, it was agile augmentation. Axel called it their liquid layer, the ability to flex capacity in weeks, not quarters. And it worked. One early stage idea, an AI-powered sustainability dashboard, was held back by a lack of in-house capability. Within ten days, Freya's team had sourced a working prototype through a mix of open talent and a solver challenge, validated early assumptions and rescored the XV. The idea went from backlog to pilot in three weeks.
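Pulling together the guardrails above, here is a minimal sketch of how the reallocation triggers might be checked. The twenty percent confidence shift and the material XV delta of twenty five percent come from the chapter, while the data structure and the check itself are illustrative assumptions.

```python
# Sketch of the reallocation guardrails: a 20% confidence shift permits
# reallocation, and a material XV drop (e.g. 25%) forces an automatic review.
# Thresholds follow the text; the structure and logic are assumptions.

from dataclasses import dataclass

CONFIDENCE_SHIFT_TRIGGER = 0.20
XV_DELTA_REVIEW = 0.25

@dataclass
class Initiative:
    name: str
    confidence_prev: float
    confidence_now: float
    xv_prev: float
    xv_now: float

def review_actions(i: Initiative) -> list:
    actions = []
    shift = abs(i.confidence_now - i.confidence_prev) / i.confidence_prev
    if shift > CONFIDENCE_SHIFT_TRIGGER:
        actions.append(f"{i.name}: reallocation permitted, confidence shifted {shift:.0%}")
    drop = (i.xv_prev - i.xv_now) / i.xv_prev
    if drop > XV_DELTA_REVIEW:
        actions.append(f"{i.name}: automatic review, XV fell {drop:.0%}")
    return actions

# The low code pilot from earlier in the chapter; the XV figures are
# invented, and only the forty percent drop mirrors the text.
pilot = Initiative("low_code_integration", 0.50, 0.35, 900_000, 540_000)
for action in review_actions(pilot):
    print(action)
```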
SPEAKER_04:Building the knowledge ecosystem.
SPEAKER_00:As the learning-centered approach took root, Freya realized they needed better systems for capturing and mobilizing knowledge across the organization. Learning that stayed within individual teams or projects wasn't fully leveraged. She worked with the IT and knowledge management teams to establish what they called the Innovation Learning Library, a searchable repository of insights, experiments, and evidence from across the portfolio. Unlike traditional project documentation that focused on what was built, this library captured what was learned. Each entry included the core hypothesis that was tested, the experiment design and methodology, the evidence collected, both qualitative and quantitative, the insights derived and their confidence level, the implications for related work, and the team members involved who could provide further context. The library was structured to make pattern recognition possible across domains. Teams could search for insights related to specific customer segments, technologies, business models, or market conditions, seeing what others had already discovered rather than recreating experiments. This knowledge ecosystem transformed how innovation knowledge flowed. Instead of linear documentation that was rarely consulted, the library became a living resource that teams actively used when planning new initiatives or facing familiar challenges. The quality of entries was maintained through a peer review process, not to police content, but to ensure clarity and usefulness. Teams received recognition for contributions that were frequently referenced by others, creating positive incentives for knowledge sharing. We're not just learning faster, Freya told her team, we're learning cumulatively.

AI enhanced decision support for dynamic resourcing. The dynamic resourcing approach required more sophisticated decision support than traditional budgeting processes. To enable informed reallocation decisions, Freya and Axel developed AI enhanced tools that provided deeper insights into portfolio dynamics. Dynamic resourcing doesn't mean impulsive or arbitrary changes, Freya explained to the governance board. It means making more informed adjustments based on real-time learning signals. The AI enhancement took several forms. Portfolio simulation allowed decision makers to model different allocation scenarios, showing how shifts in resources might affect overall portfolio performance, balance, and risk profile. Before making actual reallocations, teams could test multiple approaches and understand their likely consequences. Capability mapping identified how specific talent and resources mapped to initiative needs, making it easier to see where redeployment made strategic sense. The system could suggest optimal resource shifts based on both initiative momentum and capability alignment. Learning trajectory prediction analyzed early signals to forecast how initiatives were likely to evolve, helping identify emerging opportunities or challenges before they became obvious. This predictive capability enabled more proactive resource adjustments rather than reactive responses to already visible trends. Dependency analysis mapped connections between initiatives, showing how resource shifts might create ripple effects across the portfolio. This prevented unintended consequences where strengthening one initiative might unintentionally weaken others dependent on the same resources or capabilities. The AI doesn't make the reallocation decisions, Axel emphasized.
It provides the context and insights that help humans make better decisions. This approach transformed resource allocation from a periodic, high level process to a continuous, evidence-based practice. Resources flowed to where they could create the most value based on actual learning signals rather than initial projections or political considerations.
SPEAKER_04:The Open Talent Revolution.
SPEAKER_00:The liquid layer approach to talent proved so effective that it gradually expanded beyond the initial 15% dynamic budget allocation. Teams began integrating open talent approaches into their core work processes, not just as emergency capacity but as a strategic capability. This shift required new skills and mindsets. Innovation teams had to learn how to frame challenges effectively for external solvers, design appropriate reward structures for different problem types, evaluate solutions from diverse sources without bias, integrate external contributions into internal workflows and build on external solutions rather than reinventing them. Freya worked with HR to develop training programs for these skills, positioning open innovation as a core competency rather than a specialized technique. They created playbooks for different open talent approaches, from prize competitions to expert sourcing to startup collaborations. The open talent approach yielded benefits beyond just capacity flexibility. It brought diverse perspectives that challenged internal assumptions, specialized expertise that would have been impractical to maintain in-house, and acceleration effects that compressed learning cycles. One particularly striking example came when the team was developing a sustainable packaging concept. After weeks of internal work yielded only incremental improvements, they launched a solver challenge through InnoCentive. Within three weeks they received a solution from a material scientist in Japan who had been working on similar challenges in an entirely different industry. The approach wasn't just better, it was fundamentally different from anything they had considered internally. The future of innovation isn't just about being good at creating solutions, Freya observed. It's about being good at finding them wherever they exist.

Addressing resistance to dynamic resourcing. While the benefits of dynamic resourcing were clear, Freya also anticipated and addressed resistance from teams who might feel threatened by resource reallocation. She implemented several approaches to ease the transition. 1. Emphasize learning, not judgment: resources didn't shift away because teams were failing, but because the system was optimizing for learning and value. 2. Create transition support: teams losing resources received help in documenting and transitioning their learnings. 3. Recognize good kill decisions: teams that identified their own declining momentum and recommended reallocation received recognition. 4. Provide career continuity: team members from deprioritized initiatives were given priority for roles on high momentum projects. Dynamic resourcing isn't about winners and losers, Freya explained to managers. It's about creating a system where everyone contributes to the organization's learning, sometimes by building momentum and sometimes by recognizing when to redirect. This approach helped overcome much of the political resistance that typically undermines resource flexibility.

Tracking the health of the portfolio. Axel built a prototype dashboard. He knew it needed work, but it was a start. It didn't show how many ideas were active. It showed how many were moving. Five signals became the team's health indicators. 1. XV delta trend: are ideas gaining or losing momentum? 2. Confidence growth rates: are we learning fast enough? 3. Kill velocity: are we exiting low value work quickly and cleanly? 4. Time sensitivity shifts: are we spotting urgency early?
5. Resource fluidity: are people and funds moving where they're most needed? These weren't just innovation metrics, they were leadership signals. When David saw them he didn't ask for ROI projections. He asked, where are we underweight? That's when Freya knew the culture was shifting. The dashboard wasn't just a reporting tool, it was a strategic compass that guided portfolio decisions. It provided answers to the most critical questions facing innovation leaders, such as: Are we getting better at identifying and backing winning ideas? Are we learning efficiently or wasting resources on paths with diminishing returns? Are we responding to market signals quickly enough? Are we balancing our portfolio across horizons, domains, and risk levels? These were questions that traditional innovation metrics couldn't address. Activity metrics like the number of ideas generated or pilots launched revealed nothing about learning quality or resource optimization. Even output metrics like successful implementations or revenue generated were lagging indicators that came too late to influence decisions. The health indicators provided leading signals, early warnings when the innovation system was drifting off course, and confirmation when it was generating momentum. Perhaps most importantly, these indicators didn't just measure individual ideas, they measured the system's capacity to learn and adapt. They revealed whether the organization was getting better at innovation over time, not just whether specific initiatives succeeded or failed.

Accelerating the path to realized value. The learning loops and dynamic resourcing approach had a profound impact on how quickly innovation translated into realized value. By making learning visible and channeling resources to high momentum opportunities, the organization dramatically compressed the time from initial concept to measurable impact. Learning isn't separate from value creation. It's the engine that drives it, Freya explained to the executive team. When we get better at learning, we get better at turning expected value into realized value. This connection between learning and value realization manifested in several ways. Faster kill decisions meant resources weren't consumed by initiatives unlikely to deliver value. The average time to kill a low potential initiative decreased from nine months to less than two, freeing significant resources for higher potential opportunities. Accelerated scaling for high confidence initiatives meant value was realized sooner. The dynamic resourcing approach allowed promising ideas to receive additional resources without waiting for annual budget cycles, sometimes cutting months from implementation timelines. Higher success rates resulted from better informed decisions. As learning quality improved, the percentage of initiatives that delivered on or above their expected value increased substantially. Value preservation through earlier course corrections prevented value erosion in otherwise promising initiatives. The ability to detect and address implementation challenges early meant fewer initiatives delivered below their potential. This direct connection to realized value would become even more important as the organization matured its approach to value creation, as we'll explore in chapter 15. The learning systems established here created the foundation for systematic value delivery, transforming how the company converted potential into impact.
In traditional innovation approaches, value realization is treated as a separate phase that happens after learning, Freya observed. In our system, learning and value realization are continuously connected through dynamic resource allocation. This perspective transformed how the organization thought about the innovation timeline, no longer seeing a linear progression from learning to value, but a continuous cycle where learning accelerates value and realized value creates opportunities for new learning.
SPEAKER_04:The renewal mechanism.
SPEAKER_00:As the learning focused approach matured, Freya recognized the need for systematic renewal of the innovation system itself. Even effective frameworks and processes could become rigid or outdated as the organization and its environment evolved. She established what the team called system learning reviews, quarterly sessions focused not on individual initiatives, but on the innovation system's own performance. These reviews examined which aspects of the XV model were proving most predictive of actual outcomes, whether fit assessments were accurately reflecting strategic fit, how well governance processes were balancing speed and alignment, whether learning signals were leading to appropriate resource decisions, how effectively knowledge was being captured and reused. These reviews weren't theoretical discussions, they led to concrete adjustments to the innovation system, refinements to scoring models, updates to governance thresholds, improvements to knowledge sharing processes, and shifts in resource allocation approaches. For example, after analyzing six months of data, the team discovered that certain types of confidence assessments were more predictive than others. Technical feasibility confidence was generally accurate, while market adoption confidence tended to be overly optimistic. This insight led to recalibrated confidence scoring guidelines that improved the reliability of XV calculations. Similarly, they found that governance friction typically occurred at specific transition points, particularly when initiatives moved from exploration to scaling. This prompted the creation of specialized transition protocols that addressed common stakeholder concerns before they became obstacles. We're applying the same learning discipline to our system that we apply to our ideas, Freya explained to her team. We don't expect to get it perfect, we expect to get better over time. This commitment to system level learning prevented the innovation approach from calcifying into rigid orthodoxy. It remained alive, adaptive, and increasingly effective.
SPEAKER_04:Learning as a strategic advantage.
SPEAKER_00:In the end, the system Freya built wasn't just designed to prioritize the right ideas. It was designed to learn across time, teams and decisions. Every failure was a feedback loop. Every surprise became a signal. Every reallocation reflected what they now knew. And this wasn't just operational hygiene, it was strategic advantage. Because most organizations don't suffer from a lack of ideas, they suffer from a lack of learning. Freya's team didn't just know what was happening, they knew how fast they were improving. That was the real asset, that was the real return. The transformation was perhaps best captured in a conversation Freya had with David nearly a year after implementing the learning centered approach. They were reviewing the quarterly portfolio performance when David made an observation that would have been unimaginable in their earlier interactions. You know, he said, studying the learning velocity trends across three key initiatives, what impresses me isn't that some of these ideas are working. It's that you know why they're working, and you're applying those insights to make other ideas work better. Freya nodded. That's exactly the point. We're not just trying to get lucky with a few big wins, we're building a system that gets systematically better at creating value. David leaned back in his chair. That's the kind of innovation I can invest in. Not promises about specific ideas, but evidence that the system itself is learning and improving. That was the ultimate shift, from innovation as a collection of bets to innovation as a learning engine that continuously improved its ability to create value. In a world of accelerating change and increasing uncertainty, the capacity to learn faster than competitors wasn't just an operational advantage, it was the only sustainable advantage.

Key concept: from static to dynamic, the power of learning velocity. Learning velocity transforms how we evaluate innovation progress. Rather than measuring activity completion or simply tracking current status, it focuses on the rate at which an initiative generates validated learning that increases confidence or redirects effort.

Why learning velocity matters. Traditional innovation metrics create a dangerous illusion. They mistake motion for progress. Teams appear busy launching pilots, running workshops, and generating ideas, but these activities don't necessarily translate to meaningful learning or value creation. Learning velocity shifts the focus from what teams are doing to what they're learning, measuring how quickly they convert uncertainty into knowledge and knowledge into value. This creates four powerful advantages. 1. Earlier signal detection: learning velocity reveals which initiatives are genuinely gaining momentum versus those that appear active but aren't generating insights. This allows for earlier intervention or reallocation. 2. Resource optimization: by directing resources to where learning is happening fastest, organizations can accelerate their overall innovation performance without increasing total investment. 3. Better kill decisions: when learning velocity stalls despite continued investment, it provides an objective trigger for reassessment, making emotionally difficult kill decisions more rational and timely. 4. Cultural reinforcement: measuring and celebrating learning rates rather than just launches or completions creates cultural incentives for honest assessment and rapid adaptation.
The Learning Velocity Index (LVI). The Learning Velocity Index combines four metrics to create a comprehensive measure of learning effectiveness. 1. Confidence movement rate: how quickly confidence scores are changing, positively or negatively, based on new evidence. 2. Assumption validation: how many key assumptions have been tested against reality, regardless of outcome. 3. Pivot quality: how significantly the concept has evolved based on learning, measured by the magnitude of changes to the core value proposition. 4. Resource efficiency: learning generated per unit of investment, ensuring that teams aren't simply buying insights through excessive spending. An initiative with a high LVI is efficiently converting resources into actionable insights, regardless of whether those insights confirm or contradict the original hypothesis. Both validation and invalidation create value when they generate clear signal.

Dynamic resourcing: acting on learning signals. Learning velocity creates the foundation for dynamic resourcing, the reallocation of innovation resources, people, budget, attention, based on learning signals rather than fixed plans. Unlike traditional annual budgeting that locks resources into specific initiatives regardless of performance, dynamic resourcing creates a responsive system where initiatives showing accelerating learning receive amplified resources, stalled initiatives trigger intervention or reallocation, invalidated hypotheses lead to quick, respectful termination, and unexpected opportunities can be seized without waiting for planning cycles. Even a modest dynamic resource pool, typically 15 to 25% of the total innovation budget, creates significant performance advantages by ensuring resources flow to where they can create the most value based on emerging evidence. By measuring how fast teams learn rather than how busy they appear, learning velocity transforms innovation from a static planning exercise to a dynamic, adaptive system that continuously optimizes for maximum impact.
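As a closing illustration, here is a minimal sketch of how the four LVI components might be combined into a single index. The chapter names the components but not how they are weighted or normalized, so the equal weighting and the 0 to 1 scale below are assumptions.

```python
# Sketch of a Learning Velocity Index from the four components above.
# Equal weighting and 0-1 normalization are illustrative assumptions.

def learning_velocity_index(confidence_movement_rate: float,      # normalized 0-1
                            assumption_validation: float,          # share of assumptions tested
                            pivot_quality: float,                  # magnitude of evidence-driven change
                            resource_efficiency: float) -> float:  # learning per unit spend
    components = (confidence_movement_rate, assumption_validation,
                  pivot_quality, resource_efficiency)
    return sum(components) / len(components)

# A fast, cheap, honest learner scores high even if its insights
# end up invalidating the original hypothesis.
print(f"Nova:    LVI = {learning_velocity_index(0.9, 0.7, 0.6, 0.8):.2f}")
print(f"Quantum: LVI = {learning_velocity_index(0.3, 0.4, 0.2, 0.3):.2f}")
```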
SPEAKER_00:By treating learning as a strategic capability rather than an activity, the organization transforms how it converts expected value into realized impact.