Antifragile by Nassim Nicholas Taleb is a goldmine of practical ideas for software developers, despite not being a software development book.
Redundancy is one such idea. Taleb explains how having some redundancy reduces fragility and means we don’t need to predict the future so accurately. Think of food stored in your basement, or cash under your mattress.
Taleb notes how nature’s designs frequently employ redundancy (“Nature likes to overinsure itself”):
“Layers of redundancy are the central risk management property of natural systems. We humans have two kidneys […] extra spare parts, and extra capacity in many, many things (say, lungs, neural system, arterial apparatus), while human design tends to be spare and inversely redundant, so to speak – we have a historical track record of engaging in debt, which is the opposite of redundancy”
Software source code is a good example of human design that tends to be “spare” (having no excess fat) and “inversely redundant”. Redundancy in code is traditionally avoided at all costs. In fact, one of the first principles that junior developers are often taught is the DRY principle – Don’t Repeat Yourself. As far as DRY is concerned, redundant code is a blight that should be eliminated wherever it shows up.
There are good reasons for the DRY principle. Duplicate code adds noise to the project, making it harder to understand without adding any obvious value. It makes the project harder to modify because the same code must be maintained separately at each place it is duplicated. Each of these locations is also another opportunity to introduce bugs. Duplicate code feels like waste.
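To make that concrete, here is a minimal, hypothetical sketch (in Python, with invented function names) of the kind of duplication DRY targets, and the refactor it prescribes:

```python
# Before: the same validation rule duplicated in two places (hypothetical example).
def register_user(email: str) -> None:
    if "@" not in email or email.startswith("@"):
        raise ValueError(f"invalid email: {email}")
    print(f"registered {email}")

def invite_user(email: str) -> None:
    if "@" not in email or email.startswith("@"):
        raise ValueError(f"invalid email: {email}")
    print(f"invited {email}")

# After: the DRY refactor extracts the shared rule into a single abstraction.
def validate_email(email: str) -> None:
    if "@" not in email or email.startswith("@"):
        raise ValueError(f"invalid email: {email}")

def register_user_dry(email: str) -> None:
    validate_email(email)
    print(f"registered {email}")

def invite_user_dry(email: str) -> None:
    validate_email(email)
    print(f"invited {email}")
```

The refactored version can be changed in one place, which is exactly the appeal of DRY.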
However, as Taleb states:
“Redundancy is ambiguous because it seems like a waste if nothing unusual happens. Except that something unusual happens – usually.” [emphasis added]
What are these “unusual things that usually happen” in software development? And how could duplicate code possibly help protect us against them?
The Wrong Abstraction
Firstly, remember that duplication is eliminated by introducing abstractions, such as a function or class. The problem with abstractions is that it is difficult to know ahead of time whether a chosen abstraction is actually a good fit for your project, and the cost of getting it wrong is high. Poorly chosen abstractions add friction to making the kinds of changes the project actually needs, while still exacting an ongoing cost in complexity. There’s also the risk that by the time poor abstractions have been recognised as such, they have already spread throughout the project. Rooting them out at that point will touch code all over the project, potentially with unintended consequences.
The “unusual things that usually happen” in software development are unexpected, unpredictable (and unavoidable) changes in business requirements. These have the annoying effect of revealing the shortcomings of your abstractions, abstractions that you perhaps added while faithfully following the DRY principle.
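A hypothetical sketch of how that tends to look in code: a helper extracted from two once-identical callers gradually sprouts caller-specific branches as requirements diverge (the document types and business rules below are invented for illustration):

```python
# Hypothetical: invoices and receipts were once formatted identically,
# so the duplication was dutifully extracted into one shared function.
def format_document(lines: list[str], kind: str) -> str:
    header = "INVOICE" if kind == "invoice" else "RECEIPT"
    body = "\n".join(lines)

    # Then requirements diverged, and the "shared" code started
    # accumulating special cases for each caller:
    if kind == "invoice":
        body += "\nPayment due within 30 days"
    if kind == "receipt":
        body = body.upper()  # a rule that now applies to receipts only

    return f"{header}\n{body}"
```

At this point the abstraction is shared in name only: every change made for one caller risks breaking the other, and two small duplicated functions would be simpler to maintain.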
Too-eager abstraction and a lack of redundancy mirror the problems of centralisation, another idea explored in Antifragile. Centralisation, while efficient in the short term (read: less code), makes systems fragile. When blow-ups happen, they can take down (or at least damage) the entire system. Taleb outlines in Antifragile how such fragility and lack of redundancy was the cause of the banking system collapse of 2008.
Redundancy in the form of duplicated code, on the other hand, makes a codebase more robust. It avoids the worse evil of introducing the wrong abstraction, and in doing so limits the impact of unexpected changes in business requirements. As Sandi Metz states: “Duplication is far cheaper than the wrong abstraction”.
The Rule of Three
As it turns out, there is another software development principle (or rule of thumb) that does recognise the risks of poor abstractions and seeks to mitigate them through some redundancy. It’s called the “Rule of Three”. It states that you should wait until a piece of code appears three times before abstracting it out. (Note that this appears to contradict the DRY principle.) Waiting reduces the chance that the abstraction is premature, and increases the chance that it captures a real, recurring feature of the problem domain, one that is worth the cost of abstracting.
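A hypothetical sketch of the rule in practice (the function names and the 20% tax rate are invented for illustration):

```python
# Occurrence 1: just write the code.
def checkout_total(items: list[dict]) -> float:
    return sum(i["price"] * i["qty"] for i in items) * 1.20  # incl. 20% tax

# Occurrence 2: tolerate the duplication; a single repetition is weak
# evidence that "total including tax" is a stable, recurring concept.
def quote_total(items: list[dict]) -> float:
    return sum(i["price"] * i["qty"] for i in items) * 1.20

# Occurrence 3: the same code is needed a third time, so now extract it.
def taxed_total(items: list[dict], tax_rate: float = 0.20) -> float:
    return sum(i["price"] * i["qty"] for i in items) * (1 + tax_rate)

def refund_total(items: list[dict]) -> float:
    return taxed_total(items)
```

By the third occurrence there is real evidence that the repeated code reflects a recurring concept rather than a coincidence, so the abstraction is more likely to pay for itself.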
Introducing an abstraction is in some sense a prediction of the future. Abstractions make a certain class of future changes easier, at the cost of some extra complexity and fragility. They are worth this cost if and only if the types of changes they make easier actually turn out to be reasonably common. Following the Rule of Three means deliberately holding off on making that prediction until more evidence has come in. The assumption built into the rule is that past changes are the best predictor of future changes.
Back to Nature
Now to return to Taleb’s observation of widespread redundancy in nature’s designs. An interesting implication is that despite all of the apparent “waste” involved, evolutionary processes have nonetheless converged on redundancy as the best strategy for dealing with unpredictability, a permanent feature of the real world (or at least, a better strategy than no redundancy: having only one kidney, for instance).
At a high level, our software projects and teams are similar in the sense that they exist in a challenging, competitive environment punctuated by unpredictable change. If meaningful parallels can be drawn between complex systems, it’s worth considering the possibility that despite the apparent “waste” involved, some redundancy is likewise the best strategy for dealing with the unpredictability in our own environment.
This is all to say: go forth and fearlessly copy-paste more code 🙂