The Famous "Psychological Safety" Idea
Why Amy Edmondson’s Research on Psychological Safety Explains the Difference Between Organisations That Learn From AI and Organisations That Merely Adopt It
Psychological safety is the most widely cited and least widely practised idea in contemporary management. It has become a decorative term, something many leaders invoke in all-hands meetings and ignore in every interaction that actually matters. This is unfortunate, because beneath the corporate dilution lies one of the most rigorously tested findings in organisational research.
The single best predictor of whether a team learns is not its talent, its resources, or its strategy; it is whether people on that team believe they can speak honestly without being punished for it.
Amy Edmondson, a professor at Harvard Business School, has spent three decades producing the evidence for this claim. Her research spans hospitals, technology firms, manufacturing plants, and financial services, and the finding is consistent across all of them.
The teams that learn fastest and perform best are not the ones with the fewest errors. They are the ones that report the most errors.
Not because they make more mistakes, but because they operate in an environment where mistakes surface, get examined, and produce improvement rather than being hidden, explained away, or blamed on individuals.
1. What Psychological Safety Actually Is — And What It Is Not
Edmondson defines psychological safety as the shared belief that the team is safe for interpersonal risk-taking. The key words are shared, belief, and interpersonal risk. It is not an individual trait; it is a property of the group’s interaction patterns. It is not about objective conditions either; it is about what people believe about those conditions. And the risks it addresses are not physical or financial but social: the risk of looking ignorant, incompetent, negative, or disruptive.
This matters because every act of learning in an organisational context carries interpersonal risk. Asking a question risks looking ignorant. Admitting a mistake risks looking incompetent. Challenging a decision risks looking negative. Proposing an idea risks looking foolish. In an environment where any of these risks carries a real cost, such as a dismissive response, a note in the performance review, or a subtle exclusion from future conversations, the rational response is to stay silent. And silence, in Edmondson’s research, is the primary mechanism by which organisations prevent themselves from learning.
Chris Argyris identified the same mechanism from a different angle. His defensive routines - the skilled ways that smart people avoid having their reasoning examined - are the behavioural expression of low psychological safety. In Argyris’s terms, people engage in Model I behaviour (unilateral control, suppress negative feelings, be rational, win) because the environment has taught them that Model II behaviour (joint control, surface feelings, test assumptions publicly) is dangerous. Edmondson provides the environmental variable that Argyris’s framework implies but does not name. Argyris tells us what people do in unsafe environments. Edmondson tells us why they do it and, crucially, what conditions would need to change for them to stop.
The distinction from comfort is essential and frequently missed. Psychological safety is not the absence of tension, nor is it the avoidance of conflict. It is not an agreement to be gentle with each other. It is the presence of trust that tension, conflict, and honesty will be treated as contributions to learning rather than as grounds for punishment. For Edmondson, a team where everyone agrees, no one challenges, and difficult topics are avoided is not psychologically safe. It is psychologically comfortable, and comfort, in her framework, is a failure mode, not a success condition.
2. The Two-by-Two That Changes Everything: Safety and Standards
Edmondson’s most important finding, and the one most consistently omitted from popular accounts, is that psychological safety alone does not predict performance. What predicts performance is the combination of psychological safety and high standards. This produces a two-by-two matrix (shown above) that should be printed and placed on the desk of every leader managing a transformation:
Low safety, low standards: the apathy zone. Nobody cares, nobody tries, nobody challenges. The organisation drifts.
Low safety, high standards: the anxiety zone. People are under pressure to perform but cannot admit difficulty, ask for help, or report problems. This produces burnout, hidden errors, and defensive routines. It is the zone that Chris Argyris spent his career documenting.
High safety, low standards: the comfort zone. People feel safe but are not challenged. Conversations are pleasant. Nobody pushes back. Nothing improves. This is where most organisations land when they attempt to “build psychological safety” without simultaneously raising expectations.
High safety, high standards: the learning zone. People feel safe enough to take risks and are challenged enough that they must. Mistakes are surfaced and examined. Feedback is honest and frequent. This is where organisational learning happens and it is the only zone where AI transformation has any chance of producing genuine capability change rather than cosmetic adoption.
An organisation that creates safety without raising standards will produce teams that feel comfortable not learning AI, or comfortable using it without ever challenging how things have ‘always’ been done. An organisation that raises standards without creating safety will produce the anxiety zone: people performing adoption while hiding their confusion, their failures, and their workarounds. Only the combination produces learning.
This is where Seligman and Deci and Ryan (discussed elsewhere) connect. Seligman’s learned helplessness is the psychological state produced by prolonged residence in the anxiety zone: high pressure, no safety, repeated failure with no path to improvement. People stop trying because trying has been shown to produce only exposure to blame. Deci and Ryan’s self-determination theory explains what the learning zone feels like from the inside: autonomy (choice in how to engage with the challenge), competence (the experience of growing mastery as difficulties are overcome), and relatedness (the social connection that comes from learning together in a trusting team). Edmondson’s framework provides the environmental architecture that either enables or prevents these conditions from being satisfied.
3. The Leader’s Shadow: Why Safety Starts at the Top and Dies There
Edmondson’s research reveals an uncomfortable finding about where psychological safety comes from: it is overwhelmingly determined by the behaviour of the team leader. Not by policies. Not by values statements. Not by training programmes. By the specific, observable behaviours of the person with the most power in the room.
Three leader behaviours predict team-level psychological safety more reliably than any other variable:
Framing the work as a learning problem, not an execution problem. When the leader says “We are implementing X” (execution framing), the implicit message is that the path is known and the team’s job is to walk it. Questions become delays. Confusion becomes incompetence. But when the leader says “We are learning how X changes our work, and nobody, including me, knows exactly what that will look like” (learning framing), questions become contributions and confusion becomes shared terrain.
Modelling fallibility. When the leader admits uncertainty, acknowledges mistakes, or asks for help, it sends a signal that is disproportionate to the act itself. It does more for psychological safety than a hundred posters about “embracing failure.” Stacey would call this a gesture: a specific act by a specific person that changes the conditions for the next conversation. The power asymmetry of the leader’s position amplifies the gesture’s effect. When the most powerful person in the room admits they do not know something, it becomes safe for everyone else to do the same.
Responding to voice with engagement, not punishment. This is the behavioural test that separates espoused safety from actual safety. When someone raises a problem, challenges a decision, or reports a failure, what happens next? If the response is engagement (“Tell me more,” “What do you think we should do?”), safety is reinforced. If the response is deflection (“Let’s take that offline”), dismissal (“That’s not really the issue”), or punishment (being excluded from the next meeting), safety is destroyed. And it takes only one instance of punishment to undo months of espoused commitment to openness.
Carol Dweck’s mindset research illuminates why leader behaviour is so decisive. In a fixed mindset culture, the leader who admits not knowing something is violating the implicit contract: leaders are supposed to know. Their authority rests on expertise, and admitting uncertainty undermines it, a dynamic that is especially pernicious in technology domains. In a growth mindset culture, the same admission is leadership. It models the learning orientation that the transformation requires. Edmondson’s research provides the empirical evidence for what Dweck’s theory predicts: the leader’s willingness to be visibly fallible is the most powerful signal the organisation receives about whether learning is genuinely valued or merely discussed.
4. Dekker’s Contribution: From Safety in Learning to Safety in Failure
Sidney Dekker’s work on Just Culture extends Edmondson’s framework into the specific domain of how organisations respond to things going wrong. Where Edmondson addresses the conditions for speaking up, Dekker addresses the conditions for honest reporting after failure, and the two are inseparable, because an organisation that cannot learn from failure cannot learn at all.
Dekker identifies the central tension: learning from failure requires honest accounts of what happened, but accountability cultures make honest accounts dangerous. If reporting an error leads to blame, people will not report errors. The substitution test - “Would another similarly trained person in the same circumstances have done the same thing?” - is Dekker’s most practical contribution. If the answer is yes, the issue is systemic, not individual. Sanctioning the individual does not address the conditions that produced the behaviour. It merely ensures that the next person in the same conditions will hide the same error.
For AI transformation, this is directly operational. When a team deploys AI-generated code that contains an error, the substitution test asks: given the specification they wrote, the tools they had, the training they received, and the time pressure they were under, would another team have produced the same error? If yes, the problem is not the team. It is the specification process, the validation framework, or the organisational conditions that made careful review impractical. Dekker’s framework redirects attention from “who failed?” to “what conditions produced this failure?” and that redirection is precisely the shift from single-loop to double-loop learning that Argyris described.
Dekker also introduces a concept that reframes how we think about safety itself. Conventional thinking defines safety negatively as the absence of accidents, incidents, and harm. Dekker, drawing on Erik Hollnagel’s Safety-II framework, argues for understanding safety as a positive presence: the presence of capacities, competencies, and adaptations that create success. Safety-I asks: “How do we prevent things from going wrong?” Safety-II asks: “How do we ensure that things go right?” The distinction matters because the same human variability that produces errors also produces adaptation, creativity, and recovery. Organisations that try to eliminate variability in pursuit of safety end up eliminating the adaptive capacity that creates safety.
5. Westrum’s Typology: The Culture That Safety Produces
Ron Westrum’s typology of organisational cultures provides the macro-level frame within which Edmondson’s team-level findings and Dekker’s incident-level practices operate. Westrum identified three cultural types based on how information flows through the organisation:
In pathological cultures, information is a weapon. Messengers are shot. Failure is covered up. Novelty is crushed. Responsibilities are shirked. The driving dynamic is power: who has it, who keeps it, who uses it against whom.
In bureaucratic cultures, information is channelled. Messengers are tolerated. Failure leads to process. Novelty creates procedural problems. Responsibilities are compartmentalised. The driving dynamic is rule-following: everything must go through proper channels.
In generative cultures, information flows freely. Messengers are trained and rewarded. Failure leads to inquiry. Novelty is welcomed. Responsibilities are shared. The driving dynamic is mission: what serves the collective purpose?
The connection to Edmondson is structural. Psychological safety is the team-level manifestation of what Westrum describes at the organisational level. You cannot sustain psychological safety in a team embedded in a pathological culture, because the pathological dynamics will eventually override the team leader’s best efforts. And you cannot build a generative culture without psychological safety at the team level, because generative information flow requires precisely the interpersonal risk-taking that safety enables.
Karl Weick’s work on High Reliability Organisations (HROs) provides the operational case study. HROs, like aircraft carriers and fire crews, succeed not because they have better plans but because they have a preoccupation with failure combined with deference to expertise. When something goes wrong, authority flows to the person with the most relevant knowledge, regardless of rank. This deference is only possible in a psychologically safe environment, because it requires the leader to admit that the subordinate knows something the leader does not, and it requires the subordinate to believe that speaking up to the leader is safe.
For AI transformation, the Westrum typology provides a diagnostic that predicts adoption outcomes. When a team discovers that AI-generated code produced an error with serious consequences, what happens? In a pathological culture, the error becomes evidence that AI is dangerous and should be restricted, or it becomes ammunition against whoever championed the AI code. In a bureaucratic culture, the error triggers a new governance requirement: yet another gate, another review, another layer of approval. In a generative culture, the error triggers curiosity: what does this tell us about our specification and our process? What did we fail to make explicit? How do we improve the feedback loop so this category of error is caught earlier?
The response to the anomaly reveals the culture. And the culture, far more than the technology, determines whether the organisation learns from AI or merely reacts to it.
6. The Structural Trap: Why Safety Programmes Fail
The most common response to Edmondson’s research is a programme: “We will build psychological safety.” This response, while well-intentioned, carries a structural contradiction that Anthony Giddens would immediately recognise.
Giddens’ structuration theory holds that structures - the rules, resources, and norms that shape organisational behaviour - are not external things imposed on people. They are reproduced in daily practice by the very people who are constrained by them. The performance review that penalises admitting mistakes reproduces a low-safety environment every time it is conducted.
A “psychological safety programme” that does not change these structural mechanisms is, in Giddens’ terms, an exercise in changing signification (the story about what is valued) without changing domination (who controls resources and decisions) or legitimation (what counts as right and proper). It produces a new espoused theory - “We value openness” - while the theory-in-use remains unchanged. Argyris would call this a textbook example of the gap between what organisations say and what they do, and he would note that the gap itself is undiscussable.
Tom Peters provides a practical test. If you want to know whether psychological safety is real, do not read the values poster. Walk the floor. Watch what happens when someone challenges a decision in a meeting. Watch what happens when a team reports that the governance framework is preventing them from learning. If the response is genuine engagement - curiosity, follow-up, action - safety is real. If the response is any form of deflection, dismissal, or delayed retaliation, safety is a performance.
Seligman’s explanatory style framework applies here too. In organisations where safety is performed but not practised, people learn a very specific lesson: the stated rules are not the actual rules. The stated rule is “we welcome honest feedback.” The actual rule is “we welcome honest feedback that does not challenge anyone with more power than you.” This produces a form of learned helplessness that is particularly resistant to intervention.
The only way to break this cycle, as Deci and Ryan would predict, is through autonomy-supportive action rather than controlled mandate. Safety cannot be installed. It must be demonstrated in specific moments, by specific people, with specific consequences that contradict the learned model. The leader who, in a public meeting, changes a decision based on a junior person’s challenge has done more for psychological safety than a year of workshops. These are Karl Weick’s small wins applied to the safety domain: concrete, visible acts that change what people believe about the consequences of speaking up.
(An Organisational Prompt is something you can do now…)
Organisational Prompt
At your next team meeting try this. Before any agenda item, say: “I want to start by asking a question I genuinely do not know the answer to.” Then ask it. Make it real: “What about our current approach is not working, and why hasn’t anyone raised it?” Then be silent. Wait. The silence will be uncomfortable. That is the point.
What happens in the next sixty seconds will tell you everything about your team’s psychological safety. If someone speaks honestly, specifically, with something that surprises you, you have safety. Respond with curiosity, not correction. Ask a follow-up question. Take a note. Then, before the meeting ends, commit to one specific action based on what you heard.
Further Reading
Amy Edmondson: The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth - The definitive statement of Edmondson’s research for a practitioner audience.
Sidney Dekker: Just Culture: Restoring Trust and Accountability in Your Organization - An essential companion to Edmondson in my view. Where Edmondson addresses the conditions for speaking up, Dekker addresses the conditions for honest reporting after failure.
Ron Westrum: A Typology of Organisational Cultures - The paper that launched a thousand DevOps presentations. Read it for the typology (pathological, bureaucratic, generative) and the insight that information flow is the primary determinant of organisational performance.
Chris Argyris: Teaching Smart People How to Learn - The mechanism beneath Edmondson’s environmental finding. Edmondson provides the environmental condition and Argyris provides the individual psychology.
Disclaimer
I write about the industry and its approach in general. None of the opinions or examples in my articles necessarily relate to present or past employers. I draw on conversations with many practitioners and all views are my own.