The Scatology of Agile Architecture
Posted by Uncle Bob on Saturday, April 25, 2009
One of the more insidious and persistent myths of agile development is that up-front architecture and design are bad; that you should never spend time up front making architectural decisions. That instead you should evolve your architecture and design from nothing, one test-case at a time.
Pardon me, but that’s Horse Shit.
This myth is not part of agile at all. Rather it is a hyper-zealous response to the real Agile proscription of Big Design Up Front (BDUF). There should be no doubt that BDUF is harmful. It makes no sense at all for designers and architects to spend month after month spinning system designs based on a daisy-chain of untested hypotheses. To paraphrase John Gall: Complex systems designed from scratch never work.
However, there are architectural issues that need to be resolved up front. There are design decisions that must be made early. It is possible to code yourself into a very nasty cul-de-sac that you might avoid with a little forethought.
Notice the emphasis on size here. Size matters! ‘B’ is bad, but ‘L’ is good. Indeed, Little Design Up Front (LDUF) is absolutely essential.
How big are these B’s and L’s? It depends on the size of the project, of course. For most projects of moderate size I think a few days ought to be sufficient to think through the most important architectural issues and start testing them with iterations. On the other hand, for very large projects, I see nothing wrong with spending anywhere from a week to even a month thinking through architectural issues.
In some circles this early spate of architectural thought is called Iteration 0. The goal is to make sure you’ve got your ducks in a row before you go off half-cocked and code yourself into a nightmare.
When I work on FitNesse, I spend a lot of time thinking about how I should implement a new feature. For most features I spend an hour or two considering alternative implementations. For larger features I’ve spent one or two days batting notions back and forth. There have been times when I’ve even drawn UML diagrams.
On the other hand, I don’t allow those early design plans to dominate once I start TDDing. Often enough the TDD process leads me in a direction different from those plans. That’s OK, I’m glad I made those earlier plans. Even if I don’t follow them they helped me to understand and constrain the problem. They gave me the context to evaluate the new solution that TDD helped me to discover. To paraphrase Eisenhower: Individual plans may not turn out to be helpful, but the act of planning is always indispensable.
So here’s the bottom line. If you are working in an Agile team, don’t feel guilty about taking a day or two to think some issues through. Indeed, feel guilty if you don’t take a little time to think things through. Don’t feel that TDD is the only way to design. On the other hand, don’t let yourself get too vested in your designs. Allow TDD to change your plans if it leads you in a different direction.
Comments
Iain Holder 5 minutes later:
LDUF – Brilliant – that’s going straight in my acronym bag.
Eivind Nordby 37 minutes later:
Thanks, Bob. I’m glad you mention this. It’s useful to spend some “UML Design” time before diving into solving a new user story, with the motto “We’re doing design, the design is wrong, and that’s ok”.
Brett L. Schuchert about 3 hours later:
Eivind wrote:
“We’re doing design, the design is wrong, and that’s ok”.
I’ve heard people refer to things like this as:
Our best wrong guess.
I like that. As Bob mentioned (from Eisenhower): the plan is nothing, planning is everything. Feedback provides orientation. So we need to find a balance between planning and sticking our heads up out of our gopher holes to get some feedback.
Philip Schwarz about 4 hours later:
Hi Uncle Bob,
Always looking forward to your next post. Any thoughts about the following?
In Extreme Programming Explained – Embrace Change 2nd Ed, Kent Beck says:
McConnell writes, “In ten years the pendulum has swung from ‘design everything’ to ‘design nothing.’ But the alternative to BDUF [Big Design Up Front] isn’t no design up front, it’s a Little Design Up Front (LDUF) or Enough Design Up Front (ENUF).” This is a strawman argument. The alternative to designing before implementing is designing after implementing. Some design up-front is necessary, but just enough to get the initial implementation. Further design takes place once the implementation is in place and the real constraints on the design are obvious. Far from “design nothing,” the XP strategy is “design always.” The following graphs help you visualize for yourself when you should design.
...
Each graph will have three points, one for how you would design “by instinct”, one for how you would design if you thought about the design really hard, and one for how you would design in the light of experience.
...
The relationship of the three points and the location of the minimal design threshold will suggest whether designing up front is even an option for you, or whether you would be better off with incremental design.
Here is my attempt to reproduce three of his graphs:
doktat about 5 hours later:
Ah yes – I totally agree but you have triggered me to pose a question. First a little context. During this past week my software design course discussed the “Big Ball of Mud” paper by Foote and Yoder [http://www.laputan.org/mud/].
On page 10 the following is stated:
“When it comes to software architecture, form follows function. Here we mean “follows” not in the traditional sense of dictating function. Instead, we mean that the distinct identities of the system’s architectural elements often don’t start to emerge until after the code is working.”
In your discussion, you stated:
“However, there are architectural issues that need to be resolved up front. There are design decisions that must be made early.”
Are you using architecture and design in these two sentences to mean the same thing?
Is there a need to define a word for the structure that we create when beginning the design versus a word for the structure we achieve when we refactor? – after gaining more knowledge about the domain and how the software works?
Philip Schwarz about 5 hours later:
@Uncle Bob
You said: One of the more insidious and persistent myths of agile development is that up-front architecture and design are bad; that you should never spend time up front making architectural decisions. That instead you should evolve your architecture and design from nothing, one test-case at a time.
In Extreme Programming Refactored: The Case Against XP, the authors kick off the chapter on Emergent Architecture and Design by juxtaposing two carefully chosen extracts (attributed to Ron Jeffries) from Extreme Programming Installed:
Get a few people together and spend a few minutes sketching out the design. Ten minutes is ideal – half an hour should be the most time you spend to do this. After that, the best thing to do is to let the code participate in the design session – move to the machine and start typing in code.
Philip Schwarz about 5 hours later:
...we compare emergent design with its deadly rival, up-front-design with early prototyping (our colleague Mark Collins-Cope recently gave up-front design its own acronym, JEDI – Just Enough Design In-front)
Llewellyn Falco about 6 hours later:
In my mind Agile means “reducing the costs of mistakes”
As it applies to design, this means that if you designed the wrong thing, or just neglected that part of the design, that’s OK: it can be refactored to the right design cheaply.
It is this refactoring that allows for ‘emergent design’.
Nowhere have I seen something that advocates actively making mistakes over not making mistakes, unless the cost of not making the mistake is higher than the cost of mistake.
In other words, do not pay $10 to insure a $9 item.
And this is where my ‘gut’ kicks in. If I can code something in 5 minutes, or talk about it for 20, let’s code. If it’s going to take a day to code, let’s talk about it.
J. B. Rainsberger about 7 hours later:
I have told people this for years:
You will probably start practising “test-first programming”, which I say to mean drawing your detailed design up front, then using tests to ensure you type in your design correctly. Soon you’ll begin to notice that making tests pass causes you to change some of the design you’d diagrammed. When that happens, try not designing that kind of thing up front next time. Over time, you’ll find you need less and less design up front. You might even experiment with doing no design up front and seeing what that teaches you about how little design you can get away with. At equilibrium, you’ll find the right amount of up-front design for you.
I ask you and everyone else: do we need any more than that, really?
Nathaniel Neitzke about 9 hours later:
I really believe J. B. Rainsberger hit the nail on the head. It comes down to skill and experience. If I am on a team of highly skilled developers with plenty of experience, then a solid design would probably evolve from doing a form of behavior-driven development with very little, if any, design up front. It is all about communication! I cannot stress that enough; collaboration is huge, especially in the early stages of a sprint.
If one is not fortunate enough to work on a full team of skilled developers with plenty of test driven experience (which is often the case), then yes I agree some guidance is very important.
But I see too often how just doing a little “architecture” can lead to massive over-engineering and end up costing the project a lot of time and money.
I am currently working on a project where there was a lot of design that went on yet the architecture has some serious problems. So yet again I argue that it comes down to the skill and experience of your developers.
ErikFK about 12 hours later:
As in many situations in life there can hardly be a clear cut rule saying “thou shalt not make upfront architecture/design”.
It simply depends. On the problem, on the (technical and probably even more on the soft) skills of the team, on the domain, on the company’s culture etc.
I’ve been both in situations where I wondered a) why we took so long discussing about the architecture when a few days implementation showed how vain our efforts were or b) why we didn’t take a bit more time to get a common understanding of the prospective architecture/design as it would quite certainly have saved us much “coding into a dead end” and the subsequent refactorings.
At the end of the day though I’d agree with J.B. – but probably primarily because I’ve so far always been working in companies where there was some sort of upfront architecture/design “holy cow” and I hardly ever had the feeling that the result of that upfront effort was really worth it. And JEDI sounds marvelous – if we only could measure “just enough”...
And definitely yes: the upfront architecture/design can be no more than a first attempt, an initial guidance, that has to adapt to the stubborn facts of testing/implementing. Architecture/design are living things – not sentences written/pictures drawn in a document.
The software architect (I think we need this role – I wonder what’s the feeling of the commenters and readers of this column BTW) is in this context a person who has the skills and the will to prevent things from getting out of hand and keep the whole software “consistent” (avoiding design islands that can hardly communicate with each other, or even competing architectures fighting each other until someone throws in the towel…).
The core message of Agile for me is “Think for yourself, communicate, take responsibility and be willing to learn” – but that’s of course only one of billions of possible interpretations. Starting from there anything goes, depending on the concrete context – though some paths certainly look more promising than others…
Philip Schwarz about 14 hours later:
Uncle Bob said: Size matters! ‘B’ is bad, but ‘L’ is good. Indeed, LDUF is absolutely essential. How big are these B’s and L’s?
What do people think about what Martin Fowler said in this interview:
Before I really came across refactoring, particularly in conjunction with automated testing, I tended to look at design as something I have to get right at the beginning. Now, I look at design as something I can often do a fairly small amount of up front. I let most of the design flow from the evolutionary process. So I feel that there’s been a shift in balance. Before, I might have preferred—and these percentages are purely illustrative —80% of my design in planned mode and 20% of it as the project went on. Now I’d perhaps reverse those percentages.
Philip Schwarz about 18 hours later:
Hi Bob,
You said: There are design decisions that must be made early. It is possible to code yourself into a very nasty cul-de-sac that you might avoid with a little forethought.
Does your LDUF typically cover some or all of the FURPS+ -ilities of a system? Does it cover infrastructure? e.g. How aggressive an XPer are you?...do you sometimes delay putting in a database until you really know you’ll need it? Or do you work with files first and refactor the database in during a later iteration?
Do you consider security, transactions, and internationalisation as hard problems that require up-front design, or do you sometime refactor them in later?
For the benefit of less experienced readers (if any), let me provide some background with excerpts from various sources…
In the Unified Process, requirements are categorized according to the FURPS+ model:
Functionality
Usability
Reliability
Performance
Supportability
...
The + in the FURPS+ acronym is used to identify additional categories that generally represent constraints such as:
Design requirements
Implementation requirements
Interface requirements
Physical requirements
... Using the FURPS+ classification we can see that:
“The product will be localized (support multiple human languages)” is a supportability requirement.
“The persistence will be handled by a relational database” is a design requirement.
“The database will be Oracle 8i” is an implementation requirement.
Craig Larman in Applying UML and Patterns : an introduction to object-oriented analysis and design and the Unified Process – 2nd Ed
Some of these requirements are collectively called the quality attributes, quality requirements, or the “-ilities” of a system. These include usability, reliability, performance, and supportability. In common usage, requirements are categorized as functional (behavioral) or non-functional (everything else); some dislike this broad generalization, but it is very widely used.
...the quality attributes have a strong influence on the architecture of a system. ...it is usually the non-functional quality attributes (such as reliability or performance) that give a particular architecture its unique flavour, rather than its functional requirements.
In the UP, these factors with architectural implications are called architecturally significant requirements [or for brevity, “Factors”]...One could say that the science of architecture is the collection and organisation of information about the architectural factors…The art of architecture is making skillful choices to resolve these factors, in light of trade-offs, interdependencies, and priorities.
Architectural analysis methods and books often implicitly encourage waterfall-style extensive architectural design decisions before implementation. In iterative development and in the Unified Process, apply these ideas in the context of small steps, feedback, and adaptation, rather than attempting to fully resolve the architecture before programming. Tackle implementation of the riskiest or most difficult solutions in early iterations, and adjust the architectural solutions based on feedback and growing insight.
Martin Fowler in Is Design Dead? (last significant update: 2004):
Can we use refactoring to deal with all design decisions, or are there some issues that are so pervasive that they are difficult to add in later? At the moment, the XP orthodoxy is that all things are easy to add when you need them, so YAGNI always applies. I wonder if there are exceptions. A good example of something that is controversial to add later is internationalization. Is this something which is such a pain to add later that you should start with it right away?
... the most aggressive XPers – Kent Beck, Ron Jeffries, and Bob Martin – are putting more and more energy into avoiding any up front architectural design. Don’t put in a database until you really know you’ll need it. Work with files first and refactor the database in during a later iteration.
I’m known for being a cowardly XPer, and as such I have to disagree. I think there is a role for a broad starting point architecture. Such things as stating early on how to layer the application, how you’ll interact with the database (if you need one), what approach to use to handle the web server.
... So my advice is to begin by assessing what the likely architecture is. If you see a large amount of data with multiple users, go ahead and use a database from day 1. If you see complex business logic, put in a domain model. However in deference to the gods of YAGNI, when in doubt err on the side of simplicity. Also be ready to simplify your architecture as soon as you see that part of the architecture isn’t adding anything.
Ron Jeffries in XP Installed (2000):
All too often, projects go dark for a few months at the beginning while the programmers build some absolutely necessary bit of infrastructure. Usually the team really believes that it’s necessary, and that it will make things go faster in the long run.
YAGNI: “You Aren’t Gonna Need It.” This slogan, one of XP’s most famous and controversial, reminds us always to work on the story we have, not something we think we’re going to need. Even if we know we’re going to need it.
..we do know that sooner or later you’re going to want to have a database. But if you are able to deliver business value for a few iterations without one, just writing the information to files and reading it back, the cool thing is that the database schema will stop changing so much. The record definitions, the fields, and the formats will stabilize as the customers get to see the system in operation. Once things stabilize, it’s no big deal to go in and set up the database. Sure, it’ll still happen that you need to change it, but you’ll have most of the structure in hand, and refactorings won’t be so frequent or severe.
But wait, won’t it be really hard to convert the system to use the database after making it run on files? That could take ages.
Go back and read Code Quality, especially rule three, “Say everything once and only once.” Quality, well-factored code for your file-based application will have just one place that reads each record type, and one place that writes it. Those places will converge down to just one place that reads an arbitrary record and just one that writes. Those are the only places where the database access code will have to go.
Where possible, associate infrastructure explicitly with customer value. And where possible, do infrastructure tasks very incrementally, a little bit with each story. When your courage is high, try extra simple solutions, then watch how they work out. If your anxiety level gets high, go ahead and put in as much generality as you think you need but just enough for right now.
The proprietary file storage solution that Jeffries recommends can only result in a lot more coding, because there are issues such as concurrency, file locking, transactions, data integrity, data marshalling, constraints, and so forth that we would have to write ourselves for a multiuser system.
This wouldn’t be apparent at first; files would at first seem like the simplest way forward, assuming that the way forward is only for the next couple of weeks. However, gradually all of this extra code would need to be written as these problems pop up (most likely reported by users who are being forced to use our half-baked “database” system) until we suddenly find that we’ve written our own in-house version of SQL Server.
If we chose instead to write to the “more complex” database first, all of this would have been taken care of for us.
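The “say everything once and only once” isolation that Jeffries counts on is easy to picture in code. A minimal sketch (hypothetical names, not actual FitNesse code) in which every read and write of one record type funnels through a single class:

```python
# Hypothetical sketch: all file I/O for wiki-page records goes through
# this one class, so moving from flat files to a database later means
# replacing only this class -- nothing else touches the storage medium.
import json
from pathlib import Path

class WikiPageStore:
    """The one place that reads page records, and the one place that writes them."""

    def __init__(self, root):
        self.root = Path(root)

    def load(self, name):
        """Read one record; returns None if the page doesn't exist yet."""
        path = self.root / f"{name}.json"
        return json.loads(path.read_text()) if path.exists() else None

    def save(self, name, record):
        """Write one record."""
        self.root.mkdir(parents=True, exist_ok=True)
        (self.root / f"{name}.json").write_text(json.dumps(record))
```

A database-backed version would reimplement only `load` and `save`; whether that isolation actually holds in a real multiuser system is exactly what the critics above dispute.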
From Jim Shore’s 2004 article Continuous Design:
Initially a skeptic, I’ve been experimenting with continuous design for four years, and it’s changed the way I program.
... My upfront designs became simpler and simpler…then disappeared entirely. Today, when I start a project, I actively avoid deciding on the design. (That’s harder than it sounds). I implement the first feature, see where it takes me, implement the second one, and refactor. As someone who used to strongly advocate up-front design, I’m surprised by how successful this has been. My designs are actually simpler and easier to extend than they were with up-front design.
... How far can you take continuous design before it breaks down? Some decisions are difficult to change. Even enthusiasts of continuous design will cite “hard problems” such as security, transactions, and internationalisation that require up-front design. Or do they? The line isn’t as clear as people think. I’ve evolved each of these problems using continuous design. The better the application met the continuous design goals [following goals defined in sidebar: DRY, explicit, simple, cohesive, decoupled, isolated, present-day, no hooks], the easier the change was.
Ron Jeffries about 19 hours later:
One thing that I do not see is why we would imagine that less-qualified people would do better with up-front design unsupported by code. They are relatively clueless. Their ideas are more suspect, not less.
I think about design all the time. Almost always before starting something, I think about design and enjoy talking about it. At the same time, I’ve been designing for a half-century and my thoughts about design are pretty well-informed. Even so, I discover almost invariably that my thoughts aren’t good enough and that important things have eluded me.
I do not conclude from that that I should do more up front design, or try harder. I conclude that I should find out what the code thinks as soon as I can.
If I were less skilled at design than I am, why should I speculate longer?
unclebob about 21 hours later:
Does your LDUF typically cover some or all of the FURPS+ -ilities of a system? Does it cover infrastructure? e.g. How aggressive an XPer are you?...do you sometimes delay putting in a database until you really know you’ll need it? Or do you work with files first and refactor the database in during a later iteration?
Do you consider security, transactions, and internationalisation as hard problems that require up-front design, or do you sometime refactor them in later?
Each of the issues you have brought up was something we encountered in FitNesse. In each case we deferred the decision until necessary. In the case of the database, we found we didn’t need one and simply left the file system approach in place. Security, transactions, and internationalization were all left till later, and have caused no particular difficulty.
OTOH I think we made a fundamental error in FitNesse by using regular expressions to parse the wiki text. This decision was easy to make, and has supported FitNesse for years; and yet has always been an impedance. That impedance has grown significantly and has finally become a fairly large problem.
In most cases I’d say that the early benefit we got by using regexp was the right trade off to make. However, in this case it would have cost us very little to put in a LALR parser (like ANTLR or something). That would have saved us a lot of fussing around and I would not now be facing the prospect of ripping out the whole parser and rewriting it.
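The impedance Bob describes is inherent to the approach: regular expressions handle flat markup well but cannot track nesting, which is exactly what a real grammar (LALR, recursive descent, or otherwise) buys you. A toy illustration of the difference, using a hypothetical wiki syntax rather than FitNesse’s actual one:

```python
import re

# Flat markup: a regex substitution handles non-nested '''bold''' fine.
flat = re.sub(r"'''(.+?)'''", r"<b>\1</b>", "say '''hello''' there")
# -> "say <b>hello</b> there"

# Nested markup is where regexes give out. A minimal recursive-descent
# sketch for '''bold''' and ''italic'' spans, which can nest:
def parse(text, i=0, closing=None):
    out = []
    while i < len(text):
        if closing and text.startswith(closing, i):
            return "".join(out), i + len(closing)
        if text.startswith("'''", i):
            inner, i = parse(text, i + 3, "'''")   # recurse for the bold body
            out.append(f"<b>{inner}</b>")
        elif text.startswith("''", i):
            inner, i = parse(text, i + 2, "''")    # recurse for the italic body
            out.append(f"<i>{inner}</i>")
        else:
            out.append(text[i])
            i += 1
    return "".join(out), i
```

Each new construct is one more branch in the parser; with the regex approach, each new construct interacts with every substitution already in place, which is the kind of growing impedance described above.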
Ryan Shriver 1 day later:
Great post, and I agree – a responsible amount of design up front is not only prudent, it can significantly mitigate the risk that the system doesn’t meet some required quality levels. But “design” must not only include what runs in the JVM / CLR; it must include the “whole system”.
In my experience, the biggest area architects (agile or not) too often ignore is the system qualities (throughput, scalability, maintainability, portability, recoverability, etc.). These aren’t merely acceptance criteria on user stories; they need to be defined and managed with the architecture – otherwise a designer is not sure what levels the system must meet (and what the stakeholders are willing to pay for).
I’ve proposed a way to do this in my Agile Engineering for Architects presentation ( http://bit.ly/wlBre). It’s based upon Tom Gilb’s wonderful work on quantification of system requirements (http://www.gilb.com). You don’t have to do Big Upfront Design – but you must quantify the key system qualities and explore how design ideas will help meet these levels.
IMO, architects must be actively involved in both defining qualities AND designing to meet them. Doing this before iterating and writing code, as described in your post, would be the perfect time for this.
Curtis Cooley 3 days later:
I can certainly attest that the code we’ve written after a short design session was orders of magnitude cleaner than the code we simply tried to TDD. And the code we “tested after” was even less clean. I can also attest that the longer the design session goes, the farther from the reality of the system it gets. I agree with Ron that we should design all the time. You are correct that too many Agile projects assume no design and the design will take care of itself.
As far as how big ‘L’ is, I like the distance analogy. If the perceived end of the project is 9 months away, architect to the level of detail that you can ‘see’ from that far away. If it’s 6 months, add more detail, if it’s 12 months add less. Always always always revisit the architecture. Revisit the design.
Evolutionary design means the design evolves from constant attention, not that the system will just design itself if we do A, B, and C really really well.
John "Z-Bo" Zabroski 6 days later:
Philip,
What is your secret to quoting so many books? Are you a human being, or a Markov chain algorithm running as a background process on Bill Gates’ personal supercomputer?
Philip Schwarz 7 days later:
@John Zabroski
I’ll take that as a compliment.
I am afraid there is no secret: I just spend too much money on books, and too much time reading them; the rest, as explained by Andy Hunt in Pragmatic Thinking & Learning – Refactor Your Wetware, is done by my standard-issue R-mode CPU:
Your brain is configured as a dual-CPU, single-master bus design. ...CPU #1 is … chiefly responsible for linear, logical thought, and language processing…CPU #2 ... is more like a digital signal processor. It’s your brain’s answer to Google: think of it like a super regular-expression search engine, responsible for searching and pattern matching…
These two CPUs correspond to two different kinds of processing in your brain. We’ll call the linear processing style of CPU#1 linear mode, or just L-mode. We’ll refer to the asynchronous, holistic style of CPU #2 as rich-mode, or R-mode for short…
the R-mode search engine isn’t under your direct conscious control…R-mode is running as a background process, churning through old inputs, trying to dig up the information you need, and there is a lot of it to look through. ...
R-mode is unpredictable at best, and you need to be prepared for that. Answers and insights pop up independently of your conscious activities, and not always at a convenient time.
John "Z-Bo" Zabroski 8 days later:
It is a compliment. You should set up a tumblelog where you link back to all the blog comments you make and the books you reference. It would then be an informal, ad-hoc book review system.
I do read a lot, too, but your synthesizing skills are off the charts.
Peter 10 days later:
Sir, why are you being called “Uncle Bob”? Is there a story behind it? Didn’t find anything on the Web or your Wikipedia article. Thank you for a brief explanation.
Chris Bird 3 months later:
Erik said the following:
“The software architect (I think we need this role – I wonder what’s the feeling of the commenters and readers of this column BTW) is in this context a person who has the skills and the will to prevent things from getting out of hand and keep the whole software “consistent” (avoiding design islands that can hardly communicate with each other, or even competing architectures fighting each other until someone throws in the towel…).”
a while back.
Isn’t figuring out which “design islands to avoid” part of the system architecture in general? For example, we generally know that it is a “good idea” to use certain kinds of patterns – we generally like loose coupling, we generally like the ability to scale components independently – we have seen mainframe monoliths where, to get more DB performance, we need a forklift upgrade. So there are a whole lot of principles that good designers/architects/developers have in their hip pockets. As we put these things together we have, guess what, an ARCHITECTURE. It’s probably a good idea to get that stuff done up front so we all know what principles we are espousing.
Eventually the software we build will have to be in production (no snide comments about canceled projects here!). One principle we developers need to espouse is a principle that makes the solution resilient enough for its production environment. Knowing the desired resilience affects/constrains design/architecture. My quick Sudoku solver is rather different from software that controls a heart monitor.
I love LDUF and absolutely agree that the size of the L is rather important. There is an essence that we need up front. The challenge is getting it right. The Goldilocks principle seems to be a nice way to look at it.
William Martinez Pomares 9 months later:
Great to know Uncle Bob thinks similarly about that myth. I touched on the topic of design’s bad name in Agile in a post: The Agile Missing Point and the Waterfall Illusion.
There is Design as a Documentation Exercise, and Design as Coding. (Actually, I’m polishing another post about coding vs. development, where I mention that just code is not the solution.)
Coding to me is the actual description of the solution in a language, not necessarily executable.
Now, you can describe the solution at strategic, tactical and operational levels. The last one is executable code.
That means Design should be done at the tactical level, where you decide whether to use message queues or RPC, not at the operational level where you decide whether to use an if or a switch. If a Design description resembles every line of code to be written, it is total nonsense. They should be at different abstraction levels.
People who think design is documentation are way off. People who think design is coding at an early stage should focus on a tactical view of the situation, with major classes and important relations, not all the classes and interfaces.
Design is always revisited. Do design to start operational coding, and then do more design or adjust the old one based on operational coding output.
Another point: moving design around makes no sense at all. If you read my post, there is no way to do design after operational coding. You just chop design up and do so little of it before coding one line that it loses its strength. You just made a waterfall into a rain. But water still falls. In other words, design does not come out of code (maybe documentation does); design is created before operational coding in little chunks, and documented afterwards.