It is obviously a good idea to design the internals of an information system to be clean, tidy, and flexible for future changes. For the most part, that is exactly what people try to do.
Unfortunately, good up-front design isn’t enough. Most information systems with messy internals didn’t start out that way. Instead, when unexpected requirements changes came along, corners were cut here and there to accommodate them as painlessly as possible. The Agile approach recognizes that one slip here and another there soon add up to a tangled mess.
One of the main differences between Agile and most other approaches is that Agile realizes you can’t do all the design work up front. Sure, you can make some good guesses based on likely future changes, but you can’t design up front for changes that are completely unexpected.
Many popular non-Agile approaches do produce information systems whose internal designs are flexible with respect to anticipated changes in requirements. That sounds good. The problem is that they don’t help us, in the long run, to deal with unanticipated changes.
These approaches expect you to guess up front what kinds of requirements changes are going to come along. You then design and build the internals of the information system with those up-front guesses in mind. This internal design is then called the information system’s architecture. The idea of an architecture is that some parts of the internals will remain fixed over time, and other parts – where changes are expected – will be made flexible.
Where do decisions about likely changes come from? They can come from wild guesses, but that won’t get you very far. More often, they come from understanding the business that the information system needs to support. If you already know a lot about that business, you are well set. If not, you have to learn about it. This is where domain analysis (more loosely called systems analysis) comes in.
Domain analysis means looking at the parts of a business that need information system support, and abstracting from the details an overview in terms of what tends to remain stable (points of commonality) and what is prone to change (points of variability). You then build the software architecture to reflect these commonality and variability assumptions.
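To make the idea concrete, here is a minimal Python sketch (the domain and all the names are invented for illustration). The stable interface captures a point of commonality – every product variant calculates shipping somehow – while the interchangeable implementations behind it capture a point of variability, where the rules are expected to differ and change.

```python
from abc import ABC, abstractmethod

# Point of commonality: the architecture assumes every variant needs a
# shipping cost, so this interface is treated as stable.
class ShippingPolicy(ABC):
    @abstractmethod
    def cost(self, weight_kg: float) -> float:
        ...

# Points of variability: pricing rules differ per market and are prone
# to change, so each rule lives behind the stable interface.
class FlatRate(ShippingPolicy):
    def cost(self, weight_kg: float) -> float:
        return 5.0

class PerKilo(ShippingPolicy):
    def cost(self, weight_kg: float) -> float:
        return 1.5 * weight_kg

def checkout_total(items_total: float, weight_kg: float,
                   policy: ShippingPolicy) -> float:
    # The checkout logic depends only on the stable interface, so new
    # pricing rules can be added without touching it.
    return items_total + policy.cost(weight_kg)
```

As long as future changes really do fall at the variability point – a new pricing rule, say – they slot in painlessly. The trouble described below starts when a change arrives that the interface never anticipated.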
Architectures are built with reuse of the stable bits in mind. There are two kinds of reuse: reuse across space and reuse over time. Reusing an architecture across space means creating a whole range of applications (a product line) at the same time, each satisfying the needs of a particular band of customers by varying at the points of variability.
Reuse of an architecture over time means creating multiple generations (or versions) of a given information system. If requirements never changed, we would only ever need one version of a product. Since requirements do change, we reuse the architecture over time, from one version to the next. This lets us offer long-living products.
Now, maybe you will be lucky. Maybe you can work out up-front what kinds of requirements changes are going to come along in the future. If so, it is going to be plain sailing – all the changes can be handled in the parts of the architecture that were designed to be flexible.
That isn’t very likely though. Domain analysis only gives you a snapshot of what the business looks like at a given time. That is fine for reuse across space, creating a range of products satisfying current customers’ needs. Reuse over time, on the other hand, is a different matter.
Domain analysis doesn’t predict where a business will head in the future. Historically, when businesses were more stable, this might not have been such a problem. Nowadays, as we saw in chapter XXX, businesses have to change continually, in response to ever-increasing demands from customers and unexpected moves by their competitors. This means, of course, that the requirements for the information systems supporting a business will have to change too, and often in unanticipated ways. The upshot of this is that the commonality and variability assumptions encapsulated in a given architecture may not hold for future generations of information systems built on that architecture.
When you come across unexpected changes in requirements, you can cross your fingers and hope that the architecture is flexible enough to absorb them. If not, you have three choices:
- Evolution: change the architecture to reflect new commonality and variability assumptions
- Maintenance: force-fit the changes into the current architecture without updating the points of commonality and variability
- Stagnation: refuse to implement the new requirements
Evolution, then, evolves the current architecture whereas maintenance maintains it. The Agile approach encourages you to evolve the architecture to better handle the unexpected requirements change. Most other approaches take the second choice: fighting with the architecture, until, that is, the architecture becomes so messy that you end up being forced to make the third choice: refusing to accept unexpected requirements changes.
Non-Agile approaches rely more on maintenance for a good reason. Evolution of the architecture can involve quite extensive restructuring of the system's internals. The fear is that this can lead to lots of collateral damage, with worrisome side effects that are hard to find and even harder to fix. Agile approaches have ways of dealing with this, to reduce those fears, and to catch and deal with any side effects early.
Non-Agile approaches don’t tend to have such counter-measures, so they shy away from changing the architecture for fear of doing more harm than good. It is less frightening, at least in the short term, to handle new requirements by making localised changes. Maintenance, then, force-fits changes into the internals to impact as few parts of the internals as possible. The hope is that localising changes will limit nasty and hard to detect side effects in other parts of the system.
To recap: If unexpected requirements changes come as a nasty surprise, they upset your architectural assumptions. Evolving the architecture can lead to scary side effects, so non-Agile approaches generally encourage you to ram changes into parts of the architecture that were meant to be stable - hammering a square peg into a round hole. Programmers sometimes call this hacking the software.
Hacking can be highly skilled and creative work, delivering the maximum reward for the least possible effort. Often, development managers put their best programmers on a requirements change that needs a "really neat hack".
Hacking may well be a compelling short-term strategy for risk avoidance. Unfortunately, your brain then has to grasp not only the actual architecture, but also the subtle impacts of the mutations force-fitted into it. A hack essentially layers an implicit architecture on top of the system’s actual architecture. Each additional hack adds yet more complex dependencies, making the implicit architecture more and more convoluted.
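What does a hack look like in code? Here is a hypothetical Python sketch: a special case is bolted onto a function whose design never anticipated it, creating exactly the kind of implicit, undocumented dependency described above.

```python
# Hypothetical example of a hack. The original design assumed one tax
# rule for all orders. Then one big customer negotiated tax-free
# pricing, the deadline was yesterday, and the rule got wedged in as a
# special case instead of evolving the architecture to support
# per-customer pricing policies.
def order_total(subtotal: float, customer_id: str) -> float:
    tax = subtotal * 0.20
    # HACK: customer "4711" pays no tax. This quietly couples the
    # checkout logic to one customer's identity - an implicit
    # architecture layered on top of the real one.
    if customer_id == "4711":
        tax = 0.0
    return subtotal + tax
```

One such special case is harmless enough. The trouble is that each new emergency adds another, and soon nobody can say which parts of the pricing behaviour are design and which are accident.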
Successive waves of hacking, then, lead to an increasingly fragile lattice of implicit architectures, tangled into the system’s legitimate architecture, and circumventing more and more commonality and variability assumptions at each twist and turn.
As the internals get more and more convoluted, the system’s tangled architecture begins to creak under its own weight. It soon becomes fiendishly difficult to unravel and hence comprehend what is going on.
The "big picture" is abandoned and attention is repeatedly diverted instead to low-level implementation details. Further waves of hacking ensue, and the system's architecture continually degrades further away from anything purposeful.
Rather than the architecture being legitimately based on the system’s business requirements, we end up with a hackitecture - an essentially accidental architecture that is a coincidental side effect of the construction process.
Hackitectures are petrified designs. That is, they are frozen solid, and so, often, are the software developers who have to work with them. Their intellectual capacity to understand the internals is exhausted. They are paralyzed with fear, unwilling to make anything but the smallest changes due to the sheer effort involved. They are scared stiff of causing unseen damaging side effects. Their ability to identify and apply clever hacks diminishes to a point at which the system is declared un-maintainable. At this point the code is in control.
So far we have seen that when you hack an information system’s architecture, you get a short-term gain at the cost of a weakened internal design. If you don’t clean up that design hack soon, it will come back and bite you. If you just keep on hacking, you will continue to weaken the architecture until the system is sinking in its own internal complexity.
Ward Cunningham likened quick-and-dirty hacks to financial debt way back in 1992, at the OOPSLA software conference, where he said “A little debt speeds development so long as it is paid back promptly with a rewrite … The danger occurs when the debt is not repaid.”
Just like short-term debt, then, hacking isn’t always bad. You get some benefit up front, ahead of having to pay it off later. You weigh up the risks of going into debt, and decide the gain is worth it. Maybe a customer has asked for something urgent, and a hack helps you meet the deadline. Maybe a competitor took you by surprise, and a couple of hacks help you catch up with them fast.
Debt, though, can be pretty addictive. It almost seems like free money. It isn’t free of course. All debt has a cost. You have to subtract this cost from the value gained by going into debt. The problem is that the impact of this cost doesn’t really kick in until it is almost too late.
At first, you rely on debt only for emergencies. Your house has a leaky roof, so you borrow money. Your customer threatens to drop your product unless you add a new feature immediately, so you hack that feature in. This debt doesn’t cause you too much pain, so when the next emergency comes along, you hack a bit more, and go a little bit deeper into debt.
Keep on like this for a while, though, and you soon find your productivity crippled by interest payments. Software development slows right down. It takes longer and longer to manage the tangled mess you face. Adding another hack is no longer a quick and clever trick, but a drawn-out sinking nightmare of struggling to unravel the impact of the mountain of hacks that went before. You spend so much time and energy struggling to service the debt that you have almost nothing left for actually adding any new functionality.
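A toy numerical model makes the compounding effect vivid. All the numbers here are invented: the point is only that each unpaid hack adds a small, permanent "interest payment" of overhead, and the payments accumulate until they swallow the whole sprint.

```python
# Toy model of technical-debt interest (all figures invented).
# Each sprint starts with the same capacity, but every unpaid hack
# left over from earlier sprints costs a fixed overhead forever after.
def capacity_over_time(sprints: int, hacks_per_sprint: int,
                       interest_per_hack: float,
                       base_capacity: float = 100.0) -> list[float]:
    capacities = []
    debt = 0  # number of unpaid hacks carried forward
    for _ in range(sprints):
        overhead = min(debt * interest_per_hack, base_capacity)
        capacities.append(base_capacity - overhead)
        debt += hacks_per_sprint  # this sprint's hacks go unpaid too
    return capacities

# Two new hacks a sprint, each costing 3 units of overhead forever:
# capacity for new features drains away sprint by sprint.
print(capacity_over_time(5, 2, 3.0))
```

With these made-up figures, capacity falls from 100 to 76 in just five sprints, and hits zero – bankruptcy, in the terms used below – soon after; the linear-looking drip of small overheads is precisely what makes the decline easy to ignore until it is too late.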
Managers start to get impatient – you keep giving them excuses rather than delivering results. Customers get frustrated – the backlog of requirements that never get implemented just keeps growing longer and longer.
At some point, it becomes more expensive to add new features than the value that those new features could possibly bring to the business. You are then no longer just sinking in technical debt, you are drowning. You have reached bankruptcy.
The first rule of debt management is that you should pay back the debt as soon as you reasonably can. If you don’t, the interest payments will get you in the end.
The Agile approach encourages early repayment. As we will see in later chapters, it has specific practices for doing this. Firstly, it has refactoring, which is a systematic way of re-architecting the information system’s internals, using a series of small steps that make the design progressively cleaner and more maintainable.
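To give a flavour of what those small steps look like, here is an invented Python example. Each step – extracting the pay calculation into its own function, then naming the magic numbers – is tiny and behaviour-preserving, yet together they leave the design noticeably cleaner.

```python
# Before refactoring: the pay rule is tangled into the reporting code,
# with unexplained magic numbers (a typical pre-refactoring state).
def report_line_before(name: str, hours: float) -> str:
    if hours > 40:
        pay = 40 * 12.0 + (hours - 40) * 18.0
    else:
        pay = hours * 12.0
    return name + ": " + str(pay)

# After two small, behaviour-preserving steps: the calculation is
# extracted into its own function, and the magic numbers are named.
BASE_RATE = 12.0
OVERTIME_RATE = 18.0
OVERTIME_THRESHOLD = 40

def pay_for(hours: float) -> float:
    regular = min(hours, OVERTIME_THRESHOLD) * BASE_RATE
    overtime = max(hours - OVERTIME_THRESHOLD, 0) * OVERTIME_RATE
    return regular + overtime

def report_line(name: str, hours: float) -> str:
    return f"{name}: {pay_for(hours)}"
```

The observable behaviour is identical before and after – which is exactly what lets you take the steps with confidence, especially when tests like those sketched next are watching for regressions.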
Secondly, Agile has specific testing techniques that detect quickly if any changes you make caused any nasty side effects. This stops you living in fear of making changes to the information system’s internals.
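A minimal sketch of what such a safety net looks like, using an invented function: each test pins down one piece of expected behaviour, so if a later change to the internals breaks any of it, the failure shows up within minutes rather than months.

```python
# Hypothetical function under test (invented for illustration).
def apply_discount(price: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)

# Each test captures one expected behaviour of the function.
def test_no_discount():
    assert apply_discount(50.0, 0) == 50.0

def test_half_price():
    assert apply_discount(50.0, 50) == 25.0

def test_rejects_bad_percentage():
    try:
        apply_discount(50.0, 150)
        assert False, "expected ValueError"
    except ValueError:
        pass

# Run the checks directly here; in practice a test runner such as
# pytest discovers and runs functions named test_* automatically.
test_no_discount()
test_half_price()
test_rejects_bad_percentage()
```

Run after every small change, a suite like this is what removes the fear that makes non-Agile teams shy away from touching the architecture at all.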
The Agile approach, then, keeps the internal design flexible to the needs of all current requirements, including the unexpected ones that have emerged over time.
Of course, all this comes at a price. Some folks estimate that you end up spending as much as half of your time refactoring and testing, leaving only half for adding new functionality to the system.
That sounds pretty troublesome, but you have to think of it as the price of paying off your debt – and realise that it is saving you a fortune in interest payments. As counter-intuitive as it may seem, heavy investment in refactoring and testing actually increases your productivity. By constantly keeping the architecture clean, you can keep pace with the rate of requirements changes.
Consider the alternative, which is to not pay off the debt at all. As we have seen, many non-Agile approaches do just that. They lead to hackitectures – where the price of not paying off the debt seems small in the short term, but soon crushes you under the weight of interest payments. In the long run, going Agile both helps you go faster, and keeps you in the game longer.
All of this sounds great – but one tricky problem with technical debt is that it is hard to see unless you are actually working daily with the information system’s internals. It is a bit like a colleague who drives a fancy car and goes on cruises while quietly running up bills nobody else knows about. As far as folks outside can tell, things look fine.
When managers and other folks outside your team can’t see a looming problem, they are going to keep pushing you to add more and more requirements. In an ideal world, you would be able to push back and say you need time to clean up. In reality, managers want results not excuses. Developers are put under immense pressure to cut corners. Under that pressure, even the best intentions can slip away – and hacking rears its ugly head again.
As we will see in chapter XXX, the Agile approach doesn’t deny this tendency; it counteracts it with two permanently up-to-date sources of information that make the medium and long term impact of mounting technical debt highly visible to everybody.
Firstly, everybody sees a published and growing backlog of requirements. On its own, this could just mean that customers are being more demanding. The second piece of evidence, though, is a posted graph showing the declining rate of productivity of the team.
Together, this evidence is a wake-up call to everybody – inside the team and out – that technical debt is getting out of hand. It is time to take emergency debt reduction measures – that is, intensive refactoring and testing – to get productivity back to a consistent level again. The requirements backlog and the productivity graph will show you when you have got there.