Dave Snowden, Cynefin, and the Table Napkin Test
Why Dave Snowden’s Cynefin Framework Is the First Thing You Need Before You Touch AI
You cannot manage a complex system as if it were a complicated machine. That sentence sounds like a truism until you watch an enterprise spend eighteen months and several million pounds building a three-year AI transformation roadmap, complete with milestones, tollgates, and a governance framework so heavy it could anchor a ship. Six months in, the roadmap is fiction. The tollgates are rituals. And the people doing the work have quietly figured out their own way of using AI, none of which appears in the plan.
Dave Snowden, the creator of the Cynefin (pronounced kuh-NEV-in) framework, has spent decades explaining why this keeps happening. His central argument is not that planning is bad. It is that leaders consistently misclassify the nature of the challenge they face, and then apply the wrong response logic. They treat adaptive, emergent, deeply human problems as if they were engineering problems with discoverable solutions. And the confidence with which they do this is, itself, part of the problem.
I attended a course with Dave on an early iteration of this thinking, over twenty years ago in South Africa, when Cynefin was still being born. What follows is a brief overview of some core ideas and their direct relevance to anyone leading organisational change today.
Snowden’s body of work extends well beyond what one article can cover: anthro-complexity, constraint-based management, and the evolving Cynefin framework itself (which now includes additional domains and dynamics). This article focuses on the ideas most directly applicable to transformation leadership: domain classification, probe-sense-respond, constraint management, retrospective coherence, and distributed sensing. Readers wanting the full picture should start with the Cynefin Company’s resources at thecynefin.co.
1. The Domains: Why the Wrong Response Logic Guarantees Failure
The most consequential mistake in transformation leadership is not choosing the wrong strategy. It is misclassifying the kind of problem you are dealing with, and then applying a response pattern that cannot work. Cynefin distinguishes between domains based on the relationship between cause and effect. Two matter most here.
In the Complicated domain, cause and effect are discoverable through analysis. A jet engine fails; you call an expert, diagnose the fault, apply good practice. The knowledge exists; you just need someone qualified to find it. The response logic is Sense-Analyse-Respond. This is the realm of traditional engineering, logistics, and much of enterprise IT.
In the Complex domain, cause and effect are only coherent in hindsight. No amount of analysis can predict the outcome of an intervention, because the system changes as you interact with it. Cultural transformation, market disruption, organisational learning, and innovation all live here. The response logic is fundamentally different: Probe-Sense-Respond.
Leaders routinely treat complex challenges, like changing how an organisation builds software, as if they were complicated ones, like planning a delivery route. This distinction mirrors Heifetz’s separation of technical problems (amenable to expert solutions) and adaptive challenges (requiring changes in values, beliefs, and ways of working). Snowden provides the navigational instrument to identify which is which.
Kahneman’s System 1 and System 2 thinking explains why this misclassification is so persistent. System 1 (fast, pattern-matching, confident) sees an AI adoption challenge and immediately categorises it as a technology problem, because technology problems are familiar. System 2 (slow, analytical, high-effort) might recognise the complexity, but System 2 has a limited attention budget. In the time-pressured, fragmented reality of managerial work that Mintzberg documented so vividly, System 1 nearly always wins. Snowden’s framework is, in effect, a tool for forcing System 2 engagement at the moment when it matters most.
2. Probe-Sense-Respond: The Portfolio, Not the Roadmap
In a complex domain, the Analyse-Plan-Execute cycle that most transformations rely on is structurally incapable of succeeding. Not because analysis is bad, but because the system cannot be understood in advance of interacting with it.
Snowden advocates instead for Probe-Sense-Respond: launch multiple, small, safe-to-fail experiments designed to generate information, not achieve predetermined outcomes. Observe how the system responds. Then amplify what works and dampen what does not.
The critical distinction: probing is not piloting. A pilot tests a predetermined solution in a controlled environment. A probe explores an uncertain space to discover what might work. Pilots ask “does this solution scale?” Probes ask “what happens when we try this?” If you are running pilots, you have already decided what the answer is. You are just checking whether the organisation can absorb it. That is a complicated-domain response to what may be a complex-domain problem.
This reframes what a transformation strategy should be: not a linear document with milestones and delivery dates (though you may need elements of that for certain internal processes), but a portfolio of coherent experiments, each designed to be survivable if it fails and informative whether it succeeds or not. The strategy emerges from the pattern of what works, which is precisely what Mintzberg means by emergent strategy. Snowden provides the operational mechanism for how that emergence happens in practice.
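For readers who think in code, the amplify-and-dampen logic of a probe portfolio can be sketched as a small simulation. Everything here is invented for illustration (the `Probe` class, the budgets, the doubling and halving factors); it is an analogy for the management process, not an implementation of anything Snowden prescribes.

```python
from dataclasses import dataclass

@dataclass
class Probe:
    """One safe-to-fail experiment in the portfolio (illustrative only)."""
    name: str
    budget: float      # safe-to-fail: the most this experiment may cost
    signal: float      # stand-in for narrative feedback, in [-1, 1]
    active: bool = True

def review(portfolio, rounds=3, floor=1.0):
    """Each review cycle: sense the signal, then amplify or dampen."""
    for _ in range(rounds):
        for p in portfolio:
            if not p.active:
                continue
            if p.signal > 0:
                p.budget *= 2.0        # amplify what works
            else:
                p.budget *= 0.5        # dampen what does not
                if p.budget < floor:
                    p.active = False   # retire cheaply, without blame
    return portfolio

portfolio = review([
    Probe("pairing with AI assistant", budget=4.0, signal=0.6),
    Probe("spec-first workflow",       budget=4.0, signal=-0.2),
])
# After three cycles the promising probe has grown (4 -> 8 -> 16 -> 32),
# while the unpromising one has shrunk (4 -> 2 -> 1 -> 0.5) and been retired.
```

The point of the sketch is the asymmetry: no probe is "the plan", every probe is cheap to kill, and the budget flows toward whatever the system responds to.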
3. Constraints, Not Goals
In traditional change management, you set a specific goal (“Increase AI adoption by 40%”) and drive people toward it. Snowden argues that in complex systems, goal-setting produces perverse incentives, gaming, and Goodhart’s Law in full bloom: when the measure becomes the target, it ceases to be a good measure. Beer would recognise the pattern instantly: the purpose of the system is what it does, and what an adoption-target system does is produce adoption numbers, not changed practice.
Instead of managing outputs, you manage constraints within which behaviour emerges.
Governing constraints are rigid rules that limit possibilities: “All AI-generated code must pass automated validation before deployment.” These reduce novelty but ensure compliance. They belong in the Clear domain.
Enabling constraints create boundaries within which creative behaviour can emerge: “Teams must include a domain expert in every specification review, and they jointly determine the success criteria.” These encourage interaction and adaptation while maintaining coherence.
The design question shifts from “What should people do?” to “What constraints would make the desired behaviour more likely than the old behaviour?” You are not engineering an outcome. You are shaping a possibility space. This connects directly to Dekker’s concept of Safety-II: studying what goes right, not just what goes wrong, because the same enabling constraints that guide successful outcomes also create the conditions for adaptation when things do not go as expected.
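The difference between the two constraint types can be made concrete with a toy code-review gate. The function names, field names, and rules below are all hypothetical, chosen only to mirror the two example constraints above; this is an analogy, not a real policy engine.

```python
def governing_constraint(change: dict) -> bool:
    """Rigid rule: AI-generated code must pass validation before deployment.
    Reduces novelty, guarantees compliance; no team discretion."""
    if change["ai_generated"] and not change["validation_passed"]:
        return False
    return True

def enabling_constraint(review: dict) -> bool:
    """Boundary condition: a domain expert must be in every specification
    review, but the team decides the success criteria within that boundary."""
    if "domain_expert" not in review["attendees"]:
        return False                             # the fixed boundary
    return len(review["success_criteria"]) > 0   # the content is the team's call

blocked = governing_constraint(
    {"ai_generated": True, "validation_passed": False})   # hard stop
allowed = enabling_constraint(
    {"attendees": ["dev", "domain_expert"],
     "success_criteria": ["criteria agreed jointly"]})    # within bounds
```

Note the shape of each function: the governing constraint fully determines the outcome, while the enabling constraint only checks the boundary and leaves the substance open, which is exactly the possibility-space framing above.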
4. Retrospective Coherence and the Danger of “Best Practices”
The consultants arrive. They bring case studies. Lots of PowerPoint (shudder). They tell you about the Spotify Model, how Google innovates, how Company X transformed. They present these as clear success stories with bold vision, disciplined execution, and predictable results.
Snowden calls this retrospective coherence: the human compulsion to look back at a successful outcome and connect the dots into a causal narrative, underplaying the hundreds of failed experiments, lucky accidents, and abandoned strategies that actually produced the result. This is Kahneman’s narrative fallacy and WYSIATI (What You See Is All There Is) in operation. Our brains crave coherent stories, so we invent linear causality where none exists, package it as “best practice,” and sell it to organisations whose context is entirely different.
Snowden is clear about where best practice is legitimate: in the Clear domain, where tasks are simple and repeatable. For anything in the Complicated or Complex domains, best practice is at best misleading and at worst dangerous. Copying another company’s AI adoption model, governance framework, team structure, or toolchain fails precisely because the conditions that made those things work are invisible in the success narrative. The rituals can be reproduced. The context cannot. Bourdieu would add that what gets reproduced is the visible structure, not the habitus that made it meaningful; the form without the practical knowledge that gave it life.
5. Distributed Sensing: Finding the Weak Signals
Executives are often the last to know the truth. Information is filtered and sanitised as it moves up the hierarchy. By the time a signal reaches a quarterly review, it has been translated through three layers of management, stripped of context, and fitted into whatever narrative the current strategy requires.
Snowden advocates for “Human Sensor Networks”: using the entire workforce to provide real-time data on what is actually happening. The application to AI adoption is direct. The people who know whether AI is actually changing how work gets done are the people doing the work. The developers using (or not using) AI coding assistants. The domain experts writing (or struggling with) specifications. The team leads watching the dynamics shift (or not). These stories contain the weak signals of success and failure that no adoption metric can capture.
You still need metrics; you have a corporate communication infrastructure to feed. But do not confuse output metrics with the means by which you actually guide your initiative. Dekker’s concept of drift into failure is the warning: systems drift toward accidents through small, incremental, locally rational steps. By the time a dashboard metric turns red, the drift is often irreversible. Narrative sensing catches the drift early, when it can still be addressed. It also enables what Westrum calls a generative culture: one where information flows freely because the messenger is trained rather than shot.
6. Snowden and His Limits
Snowden provides something rare: a framework that is both intellectually rigorous and immediately diagnostic. But it should be read with its tensions visible.
Stacey would push back on the framework itself: the act of classifying a situation into a domain risks creating the very false certainty that Snowden warns against. If complexity means that cause and effect are only visible in retrospect, who decides, in real time, that a situation is complex rather than complicated? The classifier is inside the system they are classifying. Snowden acknowledges this; the framework has evolved to include dynamics of movement between domains. But the tension remains.
Anthony Giddens would note that the constraints Snowden recommends are themselves structures that will be reproduced, reinterpreted, and potentially subverted by the agents who operate within them. An enabling constraint designed by leadership may be experienced as a governing constraint by the team, depending on how it is enacted in daily practice. The design of constraints is not the end of the work. It is the beginning of a structuration process that the designer does not fully control.
And Tom Peters would remind us that frameworks, however elegant, do not move people. Energy moves people. Snowden’s work is analytically brilliant; it gives you the right categories, the right response logics, the right diagnostic questions. What it does not always provide is the emotional charge that makes people care enough to act on the diagnosis. The table napkin test below is a start. But the leader who uses it must bring their own conviction.
(An Organisational Prompt is something you can do now...)
The Table Napkin Test
I borrow this one from Dave himself.
Take a napkin, or the back of an envelope, and draw how feedback from the people doing the work actually reaches the people making decisions in your organisation. Not the org chart. Not the governance framework. The actual path: who talks to whom, what gets filtered, what gets lost, what arrives too late.
Then ask: what could you do, this week, to shorten one of those paths by one step?
Further Reading
Dave Snowden and the Cynefin Company: thecynefin.co. The official home of Snowden’s work, including the evolving Cynefin framework, resources on anthro-complexity, and SenseMaker.
Cynthia Kurtz and Dave Snowden: The New Dynamics of Strategy: Sense-making in a Complex and Complicated World (IBM Systems Journal, Vol. 42, No. 3, 2003) - The foundational paper on Cynefin and its application to strategic decision-making. Freely available PDF.
Dave Snowden and Mary Boone: A Leader’s Framework for Decision Making (Harvard Business Review, 2007) - The Cynefin framework applied to leadership. The clearest short introduction.
I write about the industry and its approach in general. None of the opinions or examples in my articles necessarily relate to present or past employers. I draw on conversations with many practitioners and all views are my own.

