Many people in the Agile community have embraced complexity, and this has had a disempowering effect. Complexity theorists tend to dismiss systems thinkers as naïve. In reality, however, systems thinking attends to complexity, just differently.
Dr. Eli Goldratt, the creator of the Theory of Constraints, says, in The Choice:
“The first and most profound obstacle is that people believe that reality is complex, and therefore they are looking for sophisticated explanations for complicated solutions. Do you understand how devastating this is?”
“If we dive deep enough, we’ll find that there are very few elements at the base—the root causes— which through cause-and-effect connections are governing the whole system.”
This is especially true in knowledge work.
Charles Perrow, in Normal Accidents: Living With High-Risk Technologies, laid out a deep understanding of complex systems and how they create unpredictable behavior. He classified systems along two dimensions that interact with each other: complexity and coupling. How these interact is shown in figure 1.
Figure 1: Charles Perrow’s Interactions Vs. Coupling Table from Normal Accidents.
It is worth noting that when people think of complex systems, they think of Three Mile Island, the Titanic, etc. Those systems would be in the upper right-hand quadrant. But knowledge work (he calls these organizations R&D firms) is in the bottom left. In this quadrant, we can avoid the system failures that are impossible to avoid in complex and tightly coupled systems. Dr. Perrow discusses this here:
Systems with interactive complexity (cells 2 and 4) will produce unexpected interactions among multiple failures. But while these are troublesome and unwanted, they need not bring about accidents—that is, damage to a subsystem or the system. Accidents will be avoided if the system is also loosely coupled (cell 4, universities and R&D units) because loose coupling gives time, resources, and alternative paths to cope with the disturbance and limits its impact. But in order to make use of these advantages of loose coupling, those at the point of disturbance must be free to interpret the situation and take corrective action. Since the disturbances are generally (not always) likely to be experienced first by operators (which include first-line supervisors and other on-duty personnel such as technicians and maintenance), this means the system should be decentralized. These personnel have two tasks: analyzing the situation and acting so as to prevent the propagation of errors. Unexpected and incomprehensible interactions will not allow immediate analysis of the cause of the accident, but given the slack in loosely coupled systems, this is not essential. It is enough that personnel perceive an unwanted system state (even though it is also unexpected and its causes are mysterious), and do so before it interacts with other units and subsystems. To do this they must be able to “move about,” and peek and poke into the system, try out parts, reflect on past curious events, ask questions and check with others. In doing this diagnostic work (“Is something amiss? What might happen next if it is?”), personnel must have the discretion to stop their ordinary work and to cross department lines and make changes that would normally need authorization. This is a part of decentralization. This delay and experimentation is possible because of loose coupling, but required by the interactive complexity.
Perrow, Charles. Normal Accidents (p. 331). Princeton University Press. Kindle Edition.
To state this more clearly, we can avoid disasters if we avoid coupling and dampen what might become a non-linear event. Disasters happen when a simple event becomes a non-linear one because things are coupled and there is no visibility into what is happening. Complexity is the culprit primarily because it hides that this is happening.
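Why slack dampens propagation can be sketched with a back-of-the-envelope model. The numbers and the "slack absorbs one coincident failure" assumption are illustrative, not from Perrow; the function name is hypothetical:

```python
def system_failure_prob(stages: int, p: float, coupled: bool) -> float:
    """Probability that one pass through a multi-stage system ends in
    a system-wide failure.

    Tightly coupled: any single local failure propagates immediately,
    so the system fails unless every stage succeeds: 1 - (1 - p)^stages.

    Loosely coupled: each stage has slack (a buffer, time, an alternate
    path), so a local failure only takes the system down when it
    coincides with a second, independent failure that has already
    consumed the slack -- modeled here as p * p per stage.
    """
    per_stage = p if coupled else p * p
    return 1 - (1 - per_stage) ** stages
```

With ten stages and a 5% local failure rate, the tightly coupled version fails on roughly 40% of passes, the loosely coupled one on roughly 2.5%. The exact figures matter less than the shape: slack turns a first-order risk into a second-order one.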
In knowledge work, it is essential to realize that much of what is going on can be explained through fundamental theories of Flow, Lean, and the Theory of Constraints. However, many relationships are obscured by the complexity of the system. Much of the waste in knowledge work is due to this obfuscation.
We can improve our way of working by creating visibility, adding feedback, and decoupling events to avoid a cascade of errors. While knowledge work is often complex, we can usually see the relationships between people’s actions. We can’t assume a predictable path forward because we can never be sure how people will react. But exploring the causality that is present creates an opportunity for improvement.
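The value of fast feedback can be illustrated with a toy model. The linear-drift assumption and all numbers are hypothetical; the point is only the shape of the curve:

```python
def accumulated_rework(total_steps: int, feedback_every: int,
                       drift_per_step: float = 1.0) -> float:
    """Rework grows with how long an error goes unseen.

    'Drift' (misalignment between the work and what is actually wanted)
    grows each step and is reset to zero whenever feedback arrives.
    Rework is the total drift carried across all steps.
    """
    drift, rework = 0.0, 0.0
    for step in range(1, total_steps + 1):
        drift += drift_per_step   # misunderstanding compounds silently
        rework += drift           # work built on the drift must be redone
        if step % feedback_every == 0:
            drift = 0.0           # feedback makes the drift visible; reset
    return rework
```

Over 100 steps, feedback every step yields 100 units of rework, every 10 steps yields 550, and a single check at the end yields 5,050. The cost of delayed feedback grows roughly with the square of the delay, which is precisely the non-linearity discussed above.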
Examining this causality is essential for another reason. When you disavow cause and effect because the system is “complex,” you also weaken people’s willingness to change. Why should they if it’s all a guess? Exploring the cause and effect that is present reveals other relationships that can make a difference. Learning and improvement is an emergent process.
The Value Stream Impedance Scorecard is one way of looking at cause and effect in the system. Based on my informal analysis of the challenges Agile teams and organizations face, I believe the causes of those challenges are (in order of both frequency and impact):
- a lack of knowledge of first principles
- a lack of systems thinking
- a lack of attention to the value stream (which results from the two above)
- non-linear events due to slow feedback, lack of visibility, and coupling
A non-linear or “chaotic” event is one in which a small action causes a disproportionately large effect, such as “the straw that broke the camel’s back.” In knowledge work, this is often the outsized impact of a slight misunderstanding of a requirement.
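A toy model can make the "straw that broke the camel's back" shape concrete. The amplification factor of 3 is arbitrary and the function is hypothetical; only the geometric growth is the point:

```python
def cost_to_fix(stages_undetected: int, base_cost: float = 1.0,
                amplification: float = 3.0) -> float:
    """Cost of correcting a misunderstood requirement after it has
    passed through some number of downstream stages undetected.

    Each stage builds work on top of the misunderstanding (design,
    code, tests, documentation), so undoing it means reworking
    everything layered on top -- the cost grows geometrically,
    not linearly, with the delay in detection.
    """
    return base_cost * amplification ** stages_undetected
```

An error caught immediately costs 1 unit; the same error caught four stages later costs 81. A small input, a considerable output.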
Complexity does cause problems, but mostly because when something unexpected happens, the failures above make it difficult for a team or organization to respond. We must deal both with internal complexity and with the fact that teams and organizations are embedded in other complex systems. We can reduce most of the waste caused by internal complexity by attending to the above. And while we can’t predict what will happen in the market or the world, being effective internally lets us respond quickly when needed.
We can mitigate complexity by recognizing that much of the waste present is due to non-linear events: minor errors create big waste. But we can dampen this exponential waste. Virtually all disasters attributed to complexity are caused by four factors: complexity, chaotic events, coupling, and lack of visibility. Think of a disaster caused by complexity as gunpowder blowing up. Gunpowder consists of a fuel (charcoal), an oxidizer (saltpeter, or niter), and a stabilizer (sulfur) that allows for a constant reaction. Complexity is the fuel. If you don’t provide the oxidizer and the stabilizer, you won’t get an explosion.
You don’t need to manage complexity; you manage what makes complexity risky. Use feedback to mitigate the risk of chaotic events. Avoid coupling when possible and always make it visible.
The claim is not that the world isn’t complex; rather, that our approach to complexity can stop us more than the complexity itself. Remember that our systems are more about the relationships between the components than about the components themselves. In complex systems (such as knowledge work), some of these relationships are hard to see or are misunderstood. But if we attend to first principles, use feedback to create visibility, and decouple our work, we can avoid the waste complexity causes. We can also improve our approach in an emergent way. Make predictions based on first principles and the Value Stream Impedance Scorecard: these will either be reasonably accurate or will clarify a relationship we either weren’t aware of or had misunderstood.
We do not need to let complexity stop us.