Decoding systems with Cynefin

Cynefin was developed by Dave Snowden and Mary E. Boone and was the subject of a 2007 Harvard Business Review article entitled “A Leader’s Framework for Decision Making”.

In January 2011, I was fortunate to attend Dave Snowden’s seminar in Frankfurt, Germany, and although we were only a small group, it changed my perception of systems.

Before starting my explanation, it must be understood that there are two levels of perception of this framework:

          • A frozen level where we analyze a situation to decode it and initiate corrective actions.
          • A dynamic level where we look at the dynamics of the systems.

Cynefin is a Welsh word for a place of multiple belongings. In the context of system dynamics, it is about making the transition from robustness to resilience.

The shift from robustness to resilience rests on three modes of inference:

          • Deduction: the result is a consequence of the truth contained in the initial hypotheses.
          • Induction: inferring a general quality from multiple instances or cases.
          • Abduction:
            • The abductive leap: the conclusion is plausible, and the hypotheses and conclusion together form the most economical or coherent explanation.
            • The logic of intuition: hunches, but not random guesses.
            • Listening to nature through instinct and experience: human metadata and training games.
            • Plausibility is a crucial element among a range of diverse and possibly conflicting options for safe-to-fail experiments, as opposed to fail-safe design.

Key terms:

        • A system is any network having coherence.
        • An agent is anything that interacts within that system (individuals, groups, ideas, etc.).

Three primary types of systems:

          • ordered systems: agents are constrained; reductionist and rule-driven; deterministic; observer-independent
          • chaotic systems: agents are unconstrained and independent of each other; studied using statistics and probabilities
          • complex systems: the system lightly constrains the agents; agents modify the system through their interactions with it and with each other; they co-evolve (irreversibility)

Aspects of complexity


– Very sensitive to small changes

– The proximity and connectivity of agents have a significant impact

– The meaning emerges from the interaction

– Coalescence, not categorization

– Hindsight does not lead to foresight. Moving from a fail-safe design to a safe-to-fail experiment


– Using distributed cognition: the wisdom but not the madness of crowds

– Working with finely grained objects: information and organization

– Disintermediation: put decision-makers in direct contact with raw data

Categorization models


Traditional categorization models use a 2×2 matrix. This categorization is useful for exploitation by experts but of little interest for change.

The risk is that we cannot observe or see the differences until it is too late.


Scientific management models and systemic thinking (Lean) are categorization models based on the premise that the framework precedes the data. The “Sense-Making” model, on the other hand, assumes that the data precedes the framework.

From the three primary system types, Snowden evolved the framework from three to five domains:

In the diagram, the unordered systems appear in yellow and the ordered systems in blue. Snowden added “disorder” at the centre to be able to distinguish between a chaotic system and disorder.

Disorder is the state of not knowing which type of system you are in, whereas a chaotic system has its own fascinating dynamics.

Simple or Obvious

The relationship between cause and effect is evident to every person in the system. For example: “The alarm sounds, you have to go out.”


The decision model is: Sense / Categorize / Respond. Here, we apply “best practices”, the standards.

The bicycle test: a bicycle is a simple system because if you take it apart, you can reassemble it identically.


Complicated

There is a relationship between cause and effect, but the connection is not apparent, and it requires expertise. Here, you must make an effort to uncover the relationship.


The decision model is: Sense / Analyze / Respond. In these systems, you apply good practices.


Complex

In a complex system, cause-and-effect relationships are only visible in retrospect, because outcomes are unpredictable and emergent.

The agent-agent relationships continually modify the system through “safe-to-fail” experiments.

The decision model is: Probe / Sense / Respond. This is the area of discovering new ways of operating: emerging practices, different and unique.

As a somewhat gruesome metaphor, there is the frog test. A frog is the result of a complex process. If we kill it, it is dead, and we cannot rebuild it in the same way (Liz Keogh).



Chaotic

In a chaotic system, there is no cause-and-effect relationship.

The decision model is: Act / Sense / Respond. This is the field of novelty and originality.
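The four decision models can be summarized in a small lookup table. The following Python sketch is purely illustrative; the function and variable names are mine, not part of Cynefin:

```python
# Illustrative mapping of Cynefin domains to their decision models.
DECISION_MODELS = {
    "obvious": ("sense", "categorize", "respond"),   # apply best practices
    "complicated": ("sense", "analyze", "respond"),  # apply good practices
    "complex": ("probe", "sense", "respond"),        # run safe-to-fail experiments
    "chaotic": ("act", "sense", "respond"),          # act first to restore order
}

def decision_model(domain: str) -> str:
    """Return the decision sequence for a Cynefin domain.

    "Disorder" deliberately has no entry: you must first find out
    which domain you are actually in.
    """
    try:
        return " / ".join(DECISION_MODELS[domain.lower()])
    except KeyError:
        raise ValueError(f"unknown domain or disorder: {domain!r}")

print(decision_model("Complex"))  # probe / sense / respond
```

Note that “disorder” is intentionally absent from the table: being unable to look up a decision model is precisely the signal that you do not yet know which system you are in.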


However, there is still a relational dimension within these systems. It helps us understand the interactions of the agents, determine which system we are in, and see how to move from one to the other.

In the accompanying illustrations, we see two categories of agents: managers in black and employees in grey.



        Obvious:

        • information flows from top to bottom
        • this flow is “obvious” to the operational staff
        • there is no need for interaction between the lower parts (dotted)


        Chaotic:

        • there is no interaction between system stakeholders
        • each is decoupled


        Complex:

        • there is no need for a relationship between the operational staff and management
        • the system is self-managing through group dynamics


    Complicated:

    • all agents in the system are connected
    • however, management coordinates the information of each agent, who remains in their unique position as an expert

Published by PierreENeis

Certified Agile Coach & Trainer, Organization Developer & Advisor

2 thoughts on “Decoding systems with Cynefin”

  1. Hi Pierre,
    thanks for your contribution. You give a good overview of the Cynefin framework. There are, though, some minor points which you could update if it makes sense to you:
    1. You write that “Cynefin” is a Gaelic word, but actually it is Welsh.
    2. A few times you use the term “model” instead of “framework”, e.g.: “From the three primary principles, Snowden has evolved the model from three to five elements”.
    3. I would add a number or a sub-text to each image, so that it is clear which image you are referring to. E.g.: I do not know which image you mean when you write: “In the illustration above, we see two categories of agents: managers in black and employees in grey.” I could not find such an image on your blog post.

    I hope my feedback is valuable to you.

    1. Thanks Hogir, I agree on Welsh, and that’s a point I need to update in the book too.
      On point 2 you are right too. Intentionally, I didn’t want to dive deeper into Cynefin. Now, whether to call it a model or a framework depends on your intention. In my words, a framework is a bit rigid, while a model distils the idea of a Rosetta Stone for systems.
      For the numbers, that’s a great idea.

      Thank you

