Innovation Playbook
A step-by-step implementation guide for running evidence-based innovation inside large organisations. Five phases, seven templates, one coherent system.
You lead innovation inside a large organisation. You might be a Head of Innovation, a VP of Product, a Design Director, or a transformation lead. You are responsible for making new things work inside a system designed to resist them.
This guide turns the full Innovation at the Core methodology into five actionable phases. Each phase introduces one or two templates, explains why they matter, and links to the printable PDF for immediate use.
Most corporate innovation programmes produce activity, not learning. Teams build prototypes nobody tests, showcases generate applause instead of decisions, and stakeholders lose trust after two or three cycles of impressive work that leads nowhere.
The difference between innovation theatre and real progress is discipline: name your assumptions before you build, test what is risky rather than comfortable, close the loop after every experiment, and accept negative results as evidence.
| Phase | Templates | Purpose |
|---|---|---|
| 1. Set Up | Innovation Canvas | Frame the learning journey |
| 2. Define | Assumption Map | Prioritise what to test by risk |
| 3. Test | Synthesis Ritual | Turn experiments into learning |
| 4. Evaluate | Confidence Dashboard, Quality Gate | Make evidence visible, protect review time |
| 5. Share | Open Studio, Crit Session | Produce decisions, not applause |
Built from 20 years leading innovation inside world-leading corporations. Not theory. Working tools that have driven measurable outcomes.
Set Up: The Innovation Canvas
Most innovation teams start with a solution. They build something, show it to someone, and call the feedback "validation." The Innovation Canvas prevents this by forcing the team to articulate what they believe, what they need to learn, and what evidence they have before building anything.
The canvas is not a form. It is a thinking tool. Teams that treat it as a form produce polished documents with no learning. Teams that treat it as a thinking tool produce messy, honest artefacts that drive real progress.
The canvas moves from left to right, from "what we believe" to "what we know" to "what we recommend."
Framing (Steps 1-3): set once, sharpen over time
- Strategic Context. What strategic bet does this initiative test? Why now? Two to three sentences a board member would recognise as relevant.
- Current State. What do we understand about the problem space from evidence, not opinion? A strong current state reads like a research brief, not a hypothesis.
- Hypothesis. "We believe [who] will [action] in [context] because [reason], and we will know when [outcome]." Specific enough to be wrong.
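The format is strict enough to check mechanically. Below is a minimal Python sketch of that check, assuming a team that stores hypotheses as structured parts; the function name and fields are illustrative, not part of the template.

```python
def hypothesis(who: str, action: str, context: str, reason: str, outcome: str) -> str:
    """Render the canvas hypothesis format; refuse to render with blanks unfilled."""
    parts = {"who": who, "action": action, "context": context,
             "reason": reason, "outcome": outcome}
    missing = [name for name, value in parts.items() if not value.strip()]
    if missing:
        raise ValueError(f"Hypothesis incomplete: fill in {missing}")
    return (f"We believe {who} will {action} in {context} "
            f"because {reason}, and we will know when {outcome}.")
```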
Evidence (Steps 4-6): the learning engine
- Assumption Map. Which assumptions carry the most risk? Plot on the 2x2 matrix, then test from the top-right corner first.
- Solution Design. The evolving solution. If it looks the same at week six as it did at week one, nothing was learned.
- Weekly Synthesis. A timeline of synthesis entries. By week six, four to six entries showing the learning journey.
Assessment (Steps 7-8): the verdict
- Confidence Dashboard. Where is evidence strong? Where is it thin? A single view across seven dimensions.
- Recommendation. Continue, pivot, or stop. Based on evidence, not optimism.
At the end-of-cycle showcase, the canvas provides roughly 80% of the content. It is the backbone of every decision.
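Teams that keep a digital copy of the canvas sometimes mirror the eight steps in a simple record, so showcase content can be pulled straight from it. A minimal Python sketch, assuming such a workflow; the field names are illustrative, not part of the printed template.

```python
from dataclasses import dataclass, field

@dataclass
class InnovationCanvas:
    """One canvas per initiative; one field per step of the 8-step journey."""
    # Framing (steps 1-3): set once, sharpen over time
    strategic_context: str = ""   # the strategic bet, and why now
    current_state: str = ""       # what evidence says about the problem space
    hypothesis: str = ""          # "We believe [who] will [action] ... when [outcome]"
    # Evidence (steps 4-6): the learning engine
    assumptions: list[str] = field(default_factory=list)       # feeds the Assumption Map
    solution_design: str = ""     # the evolving solution
    weekly_synthesis: list[str] = field(default_factory=list)  # one entry per cycle
    # Assessment (steps 7-8): the verdict
    confidence_notes: str = ""    # summary of the Confidence Dashboard
    recommendation: str = ""      # continue, pivot, or stop
```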
Innovation Canvas Template
8-step visual learning journey. Print and pin to a wall.
Common mistake: The Build-Test-Learn Trap
"Build" becomes "build a full prototype." "Test" becomes "show it to colleagues." "Learn" becomes "confirm what we already believed."
The canvas prevents this by requiring the team to articulate their hypothesis and extract assumptions before building anything. If you cannot answer "which assumption did this test?" the build is not connected to the learning agenda.
Define: The Assumption Map
Every hypothesis contains assumptions. Some are trivial, some are existential. Teams naturally gravitate toward comfortable experiments, testing assumptions where evidence is likely to be positive. The Assumption Map makes risk the organising principle of all experimentation.
The map is a 2x2 matrix. X-axis: uncertainty (how confident are we?). Y-axis: consequence (if wrong, how much damage?).
| | Low Uncertainty | High Uncertainty |
|---|---|---|
| High Consequence | Plan: fairly sure but stakes are high, so verify | Test First: highest priority, test immediately |
| Low Consequence | Park: low risk, address later | Monitor: uncertain but low stakes, keep an eye on it |
The discipline: test from the top-right corner first. Those are the assumptions that could kill the initiative.
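For teams that track the map in a script or spreadsheet export rather than on a wall, the quadrant logic is simple enough to encode. A minimal sketch, assuming each assumption is rated high or low on both axes; the function name is illustrative.

```python
def quadrant(uncertainty_high: bool, consequence_high: bool) -> str:
    """Map an assumption to its Assumption Map quadrant and action."""
    if uncertainty_high and consequence_high:
        return "Test First"   # top-right: could kill the initiative; test immediately
    if consequence_high:
        return "Plan"         # fairly sure, but stakes are high, so verify
    if uncertainty_high:
        return "Monitor"      # uncertain but low stakes; keep an eye on it
    return "Park"             # low risk; address later
```

Whichever quadrant an assumption lands in, write it so it can be tested. A well-formed assumption is: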
- Specific: "80% of targeted users will complete the task within the first week" not "people will use it"
- Testable: "Unit economics are positive at 500 transactions per month" not "the business case works"
- Falsifiable: "Customers will choose the new process over the existing one" not "customers want better service"
- Connected: Each assumption maps to a confidence dimension (Desirability, Op. Feasibility, etc.)
Update the map weekly. New assumptions emerge as the team learns. Tested assumptions move to a "tested" state with a note about the result. Over time, the map becomes a record of the learning journey.
Assumption Map Template
2x2 risk matrix. Plot assumptions by uncertainty and consequence.
Common mistake: The Comfortable Experiment Trap
Teams interview users who are known to be enthusiastic. They test in favourable locations. They measure metrics where improvement is almost guaranteed. Meanwhile, the critical assumptions sit untested.
If every experiment produces a positive result, something is wrong. Ask the team to point to the top-right assumption and describe the experiment that tested it. If they cannot, the trap is active.
Test: The Synthesis Ritual
Running experiments is not the hard part. Synthesising what you learned is. Teams that run experiments without synthesising them accumulate activities, not insight. They can tell you what they did but struggle to tell you what it means.
The Synthesis Ritual closes this gap. Six questions after every learning cycle. The answers become the team's primary learning artefact and their status update in one document.
1. What did we set out to learn? Connects the experiment to the assumption map. If you cannot name the assumption, the experiment was not connected to the learning agenda.
2. What did we do? A factual account. Not a justification or sales pitch. What actually happened, who was involved, how long it took.
3. What did we find? The evidence, not the interpretation. What the data, observations, or user behaviour actually showed. Separate fact from inference.
4. What changed in our confidence? Did a dimension move from Assumed to Explored? From Explored to Validated? If nothing changed, the experiment was not designed to produce a meaningful signal.
5. What does this mean for our hypothesis? Does it hold? Need refinement? Need to be abandoned? Confront the implications.
6. What is the next most important thing to learn? Feeds directly back into the assumption map. The cycle closes.
Make it rhythmic. Weekly or fortnightly, same time, same format. The first few times feel effortful. By the fourth, the team completes it in thirty minutes. A synthesis where every answer is positive is suspicious. Real learning produces mixed signals.
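Teams that log the ritual digitally can keep one structured record per cycle, so the synthesis timeline on the canvas builds itself. A minimal Python sketch, assuming such a log; the field names are illustrative.

```python
from dataclasses import dataclass

@dataclass
class SynthesisEntry:
    """One entry per learning cycle; one field per ritual question."""
    set_out_to_learn: str    # 1. which assumption this experiment tested
    what_we_did: str         # 2. factual account, not a sales pitch
    what_we_found: str       # 3. the evidence, separated from interpretation
    confidence_change: str   # 4. e.g. "Desirability: Assumed -> Explored"
    hypothesis_impact: str   # 5. holds, needs refinement, or abandoned
    next_to_learn: str       # 6. feeds back into the assumption map
```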
Synthesis Ritual Template
6 questions that turn experiments into learning.
Common mistake: The Missing Hypothesis
Teams do excellent research: interviews, workflow observations, detailed personas. But there is no testable belief. Without a hypothesis, there is nothing to test. The team defaults to building something that "addresses the findings" rather than testing a specific bet.
The synthesis ritual forces the connection. If question one ("what did we set out to learn?") has no clear answer, the experiment was disconnected from the learning agenda.
Evaluate: Confidence Dashboard + Quality Gate
The dashboard tracks evidence across seven dimensions in a single view. It answers the question that matters most to sponsors: "How confident are we that this will work, and what is that confidence based on?"
| Dimension | The Question |
|---|---|
| Strategic Fit | Is this the right bet? |
| Current State | Do we understand the problem from evidence? |
| Desirability | Will people want this? |
| Op. Feasibility | Can it work in the real environment? |
| Tech. Feasibility | Can we build and sustain it? |
| Viability | Does the business case hold? |
| Adoption | Is there a path to scale? |
Each dimension is scored on four levels: Assumed (belief, no evidence), Explored (discovery-level experiments, what people say), Validated (observed working in real conditions), or Invalidated (evidence fundamentally compromises the hypothesis).
The critical distinction: Explored means people told you it would work. Validated means you watched it work. Many programmes stall at Explored and treat it as proof. It is not.
A team at Assumed across multiple dimensions after several learning cycles has a practice problem, not a progress problem.
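The dashboard itself can be as simple as one level per dimension. A minimal sketch of that view, with an illustrative helper that flags the practice problem described above; the three-cycle threshold is an assumption, not part of the method.

```python
from enum import Enum

class Level(Enum):
    ASSUMED = 0       # belief, no evidence
    EXPLORED = 1      # discovery-level experiments: what people say
    VALIDATED = 2     # observed working in real conditions
    INVALIDATED = -1  # evidence fundamentally compromises the hypothesis

DIMENSIONS = ["Strategic Fit", "Current State", "Desirability",
              "Op. Feasibility", "Tech. Feasibility", "Viability", "Adoption"]

def stale_dimensions(dashboard: dict[str, Level], cycles_elapsed: int) -> list[str]:
    """Flag dimensions still at Assumed after several learning cycles:
    a practice problem, not a progress problem."""
    if cycles_elapsed < 3:  # illustrative threshold for "several"
        return []
    return [d for d in DIMENSIONS if dashboard.get(d, Level.ASSUMED) is Level.ASSUMED]
```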
Confidence Dashboard Template
7-dimension evidence tracker. The sponsor's single view.
The Quality Gate breaks the revision cycle by moving the checkpoint to before submission. The team self-assesses against six criteria. If any criterion is not met, the work stays with the team.
- Benefit hypothesis: Does this deliverable have a measurable benefit hypothesis?
- Prior feedback: Have you addressed all feedback from the last review?
- Clear narrative: Is the core narrative clear: what problem, for whom, how?
- Real user testing: Have you tested this with at least one real user?
- Standalone clarity: Can someone outside your team understand this without explanation?
- Correct format: Is this in the correct template?
The gate is protection, not policing. Frame it as: "This exists so your effort counts." A "no" answer comes with a specific instruction for what to fix. The gate redirects; it does not just reject.
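That redirect-not-reject behaviour is straightforward to encode: each failed criterion returns its fix instruction rather than a bare "no". A minimal sketch using the six criteria above; the fix instructions are illustrative assumptions, not template text.

```python
# Each criterion pairs the yes/no question with the instruction a "no" triggers.
QUALITY_GATE = [
    ("Does this deliverable have a measurable benefit hypothesis?",
     "Write the benefit hypothesis before resubmitting."),
    ("Have you addressed all feedback from the last review?",
     "Work through the open feedback items first."),
    ("Is the core narrative clear: what problem, for whom, how?",
     "Rewrite the opening so the narrative answers all three."),
    ("Have you tested this with at least one real user?",
     "Run at least one real-user test and record what happened."),
    ("Can someone outside your team understand this without explanation?",
     "Ask an outsider to read it cold and fix what confuses them."),
    ("Is this in the correct template?",
     "Move the content into the correct template."),
]

def gate(answers: list[bool]) -> list[str]:
    """Return the fix instruction for every criterion answered "no".
    An empty list means the work is ready to submit."""
    return [fix for (_, fix), ok in zip(QUALITY_GATE, answers) if not ok]
```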
Quality Gate Template
6-question self-assessment. Pass before submitting.
Common mistake: The Sunk Cost Showcase
Evidence is mixed. The honest recommendation would be to pivot or stop. But the team has invested significant effort. So the showcase becomes a highlight reel: polished prototype, positive quotes, invalidated assumptions mentioned in passing or omitted.
The dashboard prevents this. When the sponsor can see two dimensions at Invalidated and three at Assumed, the evidence speaks for itself. Pre-agreed kill criteria remove the emotion from the decision.
Getting Started
Do not introduce all seven templates at once. That is the kind of process overload that makes teams abandon tools entirely. Spread deployment across three cycles.
Cycle 1: Foundation
Innovation Canvas + Synthesis Ritual. These establish the core disciplines: framing the learning journey and closing the loop after each experiment.
Cycle 2: Prioritisation
Add Assumption Map + Quality Gate. Formalise risk-based prioritisation and catch below-the-line work before it reaches the sponsor.
Cycle 3: Visibility
Add Confidence Dashboard, Crit Session, Open Studio. By now the team has two cycles of practice and is ready for the full system.
Week-by-week rhythm
Week 1: Frame. Weeks 2-4: Learn (experiments + synthesis). Week 5: Confidence review + final Crit. Week 6: Open Studio + decision.
The Seven Templates
1. Innovation Canvas
8-step visual learning journey
2. Quality Gate
6-question self-assessment checklist
3. Assumption Map
2x2 risk prioritisation matrix
4. Synthesis Ritual
6 questions that turn experiments into learning
5. Confidence Dashboard
7-dimension evidence tracker
6. Open Studio Format
35-minute showcase run sheet
7. Crit Session Format
60-minute peer feedback structure
Full Guide
The complete Innovation at the Core guide (80+ pages)
The tools in this playbook do not guarantee success. No tool can. What they guarantee is that failure happens early, cheaply, and with learning attached.