Beliefs Before Agile

Taming Uncertainty

The Software Crisis

In the 1950s, computer programmers wore white lab coats. You couldn’t let mere mortals near the computer – this was a job for scientists. Unfortunately, programming turned out to be a lot less scientific than folks had hoped. Even in those early days, projects were going wildly over budget, and taking far too long to complete. The software often missed many of its original requirements, and the quality and reliability of the requirements it did meet were often pretty poor.

As the capabilities of computers increased over the next decade, the expectations of what could be done with computers increased, and so did the complexity of the programs created to meet those expectations. This meant that the unpredictability of projects didn’t go away – it got much worse. By 1968, leading experts had declared a Software Crisis. Unpredictability had to be tamed.

Predict, Plan, Control

The problem, it was decided, was that computer programming wasn’t disciplined enough. People resolved to inject some discipline, in order to gain better control over software project risks. The most common strategy was to try to predict far ahead what would happen in a project, make a fixed plan early on based on those predictions, and then control the project through that plan.

People started to publish very detailed software development processes based on this idea of predict, plan, and control.

Fixed-Requirements Methodologies

These processes soon started to be called methodologies – a rather grand term that has started to fade out in recent years but is still used quite widely.

In the 1970s and early 1980s the methodologies tended to be called structured approaches – because they added some structure to a project. I prefer to call them fixed-requirements methodologies because they achieve that project structure by fixing software requirements in stone.

These fixed-requirements methodologies break software development into a series of separate stages – analysis of requirements, design of the software’s internal structure, construction of the software code, testing that the software satisfies the original requirements, and so on. The strategy here is to document a complete and fixed set of requirements up-front, and only then start on design, which in turn is pinned down before any programming is allowed.

Fixed-requirements methodologies, then, reckon that the requirements for an information system can be pinned down up-front and then won’t need to change. Sure, there might be a few maintenance tweaks later on, but these won’t change things too much. This might have been true in the 1970s and into the early 1980s, when information systems were not as complex as they are now, and when the business world moved more slowly.

Object-Oriented Methodologies

By the mid 1980s, though, the business world was speeding up. Companies started to change their minds more frequently about what their information systems would do. The software developers complained that customers kept moving the goalposts. Customers complained that the software developers weren’t being flexible.

By the mid to late 1980s, then, people were noticing that businesses in the real world do change, and information systems need to change with them. This meant we couldn’t expect to pin requirements down up-front. Instead, things had to be a bit more flexible.

By the late 1980s and throughout the 1990s, many folks pointed out that objects (i.e. things) in the real business world had certain degrees of flexibility built in. The idea behind the various resulting object-oriented methodologies is to directly mimic in software the real-world things from the business that it will support, so that the software will have the same flexibility as the business. When changes occur in the business, then, they should be pretty straightforward to reflect in those same points of flexibility in the information system.
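
To make that concrete, here is a minimal sketch of the idea – a hypothetical example, not taken from any particular object-oriented methodology. All of the names (Carrier, RoadCarrier, Order, and so on) are invented for illustration. The real-world concept of a delivery carrier becomes an explicit point of flexibility in the software, so that a predictable business change – switching or adding a carrier – maps to one small, expected change in the code:

    // A minimal, hypothetical sketch of the object-oriented idea: model a
    // real-world business "thing" (a delivery carrier) as an explicit point
    // of flexibility. The names are illustrative, not from any methodology.

    interface Carrier {
        // Each real-world carrier prices shipping differently.
        double quoteShipping(double weightKg);
    }

    class RoadCarrier implements Carrier {
        public double quoteShipping(double weightKg) {
            return 5.0 + 0.50 * weightKg;   // flat fee plus a per-kilo rate
        }
    }

    class AirCarrier implements Carrier {
        public double quoteShipping(double weightKg) {
            return 20.0 + 2.00 * weightKg;  // faster, so pricier
        }
    }

    class Order {
        private final double weightKg;
        private final Carrier carrier;      // the built-in point of flexibility

        Order(double weightKg, Carrier carrier) {
            this.weightKg = weightKg;
            this.carrier = carrier;
        }

        double shippingCost() {
            return carrier.quoteShipping(weightKg);
        }
    }

    public class CarrierDemo {
        public static void main(String[] args) {
            System.out.println(new Order(12.0, new RoadCarrier()).shippingCost()); // 11.0
            // When the business switches carriers, only this choice changes;
            // a brand-new carrier means adding one new class, not redesigning.
            System.out.println(new Order(12.0, new AirCarrier()).shippingCost());  // 44.0
        }
    }

As long as a change lands on one of these pre-planned points of flexibility, reflecting it in the software really is straightforward; the trouble, as we’ll see, comes when changes arrive where no flexibility was designed in.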

Shared Beliefs

These various fixed-requirements and object-oriented methodologies have different fine print telling you how to go about a project, but they also tend to have a lot of things in common:

  • Put faith in the process: The team isn’t in charge of the process; the process is in charge of the team. Since humans are seen as the weak link in a project, the process is there to stop them doing wrong. People are slotted into specific roles – such as requirements analyst, programmer, tester, and so on. Each has clearly defined responsibilities. The methodology prescribes every single step to be followed by each and every role-player to satisfy all of their responsibilities. Wherever people find room for manoeuvre, they have chances to make mistakes, so any such room is seen as an unintended gap in the methodology. Later versions of a methodology then fill in the gaps found in earlier versions, leaving nothing to chance.
  • Focus on creating documents: They emphasise producing plenty of documents and detailed diagrams to capture requirements and designs. Just like building architects produce scale models of the buildings they are designing, these documents and, particularly, diagrams are seen as models of the software to be developed. These models are expected to be as complete as possible – missing nothing out. Anything that goes into the software – when folks get around to creating it – should be traceable back to some detail in these models. Software, then, is seen as the final translation of a series of earlier document-based models.
  • Use documents to communicate: Since folks should be capturing all of the important project information in detailed documents, those documents are relied on to communicate the state of the project among the various people involved. Documents are far more reliable than other forms of communication because they are permanent – they don’t forget. A methodology can also define standard structures for these documents, with placeholders for all required information. A document is not complete until all sections have been filled out.
  • Expect stable requirements and design: They expect requirements to be pinned down up-front. In the case of object-oriented methodologies, there is a little wiggle room to allow requirements and designs to evolve – but only in expected ways. Consequently, the emphasis is on creating "robust" designs providing stable software architectures that fix the internal structure of the software, either completely rigidly, or with limited flexibility where future changes are expected. There is no room for nasty surprises – and especially not for unexpected changes.
  • Try to control risks up-front: Requirements analysis and detailed design are seen as the riskiest areas. The fear is that if you get these wrong you might end up creating the wrong software – and fixing that mistake later will be very expensive. Huge up-front effort, then, is spent on risk reduction by making sure the requirements and the design are perfect. All programming is delayed until the much later "construction" phase of the project. This only starts when the requirements and design have been frozen – usually when everybody has signed off on them. Once construction is complete, the later testing phase checks whether programmers introduced any errors when translating the design documents into software code. These errors will need correcting in a later phase called debugging.
  • Emphasise management control: The job of management is to ensure that people and the project stick to the methodology. Management breaks work down into a sequence of phases; produces a detailed plan and time schedule for those phases; slots people into distinct roles; instructs each person to complete the work dictated by the responsibilities of their role; ensures that each phase ends with completed documents that are then passed on to the start of the next phase; and enforces greater control when things slip from the pre-planned schedule or whenever unexpected – and therefore unwanted – events occur.
  • Are repeatable: Since the methodology is complete, there is no need to add or remove anything. Management should be able to install the methodology in just about any company, with just about any team, on just about any project, and get predictable results. In the few areas where local conditions do need to be taken into account, the methodology has a fixed number of predefined slots where management can plug in any necessary adaptation.

Magnifying Risks

Unpredictability is inherently risky, so these methodologies try to control risk by making predictions up-front, then not allowing deviation from those predictions later on.

At first, prediction and up-front planning did help a lot of companies reduce their project unpredictability. In the 1970s and into the early 1980s, businesses themselves were often pretty stable, and changed quite slowly. So you really could, in many cases, work out the requirements of a project up-front, and write these up in full before starting any programming. By the time you delivered the software, the requirements were either still the same as when you started, or hadn’t budged much.

By the mid to late 1980s, businesses were starting to be more flexible – they had to adapt to outmanoeuvre their competitors and to satisfy increasingly demanding customers. Still, change was considered to be predictable, and specific points of flexibility were added to information systems to deal with it.

Through the 1990s, and now beyond the year 2000, this pace of change has kept on increasing, so that businesses – and the requirements of the information systems that support them – are now frequently far too unpredictable to be planned for completely up-front.

As people have tried to keep using these methodologies in recent times, many have learned that although predictions are certainly comforting – they give you a feeling of control – that feeling might be an illusion.

Predictions, and the plans made from them, don’t make unpredictability go away – they just stop you from being distracted by it. This gives you clear focus, but also leaves you blinkered – you are blind to the changing needs around you.

Nowadays, the original captured requirements – to which the project sticks no matter what – often end up being far removed from the eventual requirements of a fast-moving business. The longer the project takes to deliver software, and the more rapidly the requirements change, the greater this mismatch becomes. You can end up following the methodology perfectly, creating software that does exactly what the up-front requirements asked for, and then find yourself delivering something that nobody needs anymore.

In summary, these methodologies were designed to control risks. In times of high predictability, they did just that. Now that requirements are changing faster than ever before, these methodologies don’t control risks – they magnify them.

Book

This is part of a book I am writing. More will be unfolding here and on my blog. Ongoing feedback would be much appreciated.