Going Fast

Posted by Uncle Bob on 01/26/2007

I'm having a discussion on comp.object at the moment about whether or not quick and dirty solutions are really quick.

Over the last 35 years I have learned something about software.

The only way to go fast, is to go well.
Brian Marick recently said it differently. He said:
When it comes to software, it never pays to rush.

But this seems to fly in the face of common behavior. Most developers respond to schedule pressure by cutting corners, rushing, or falling back on quick and dirty solutions. The common belief is that these behaviors provide a short term increase in speed, even though they slow you down in the long run. Indeed, this behavior is so prevalent that there are many different anti-patterns named for the outcome. One is called Big Ball of Mud.

Every one of us has been slowed down by messy code. Often it is the same messy code that slows us down over and over again. Yet in spite of the fact that we are repeatedly slowed down by the messes we write, we still write them. Despite the impediment we clearly feel, we still believe that making the mess helped us go faster. This is about as close to the classical definition of insanity as you can get!

I subscribe to a basic professional ethic: Making messes never makes you go faster, not even in the short term. I gained this ethic after years of proving it to myself the hard way. Quick and dirty solutions feel faster, but simply are not.

Actually, this is easy to prove with simple logic.

Assume that quick and dirty solutions help you go faster in the short term, but cause delays in the long term. We know that the latter is true, because we have all felt the impediment of messes.

Now, a long term is really just a series of short terms. The first short term can be finished more quickly by using a quick and dirty solution. But what about the second? Well, clearly, if our assumption is true then the second short term can also be finished more quickly by using a quick and dirty solution. And so can the third. And so can the fourth. And therefore the long term can be done more quickly as a series of quick and dirty solutions.

But this is counter to our observation that messes slow you down in the long term. Therefore the initial assumption is incorrect. Reductio ad absurdum. Q.E.D. Making messes slows you down even in the short term, no matter how short a term you choose!

But this is more than a simple play on logic. Teams actually behave this way! You've probably seen it yourself. A team makes a mess in order to get done "quicker", and tells themselves they will go back and clean it up later. But later never comes, and at the next schedule crunch the team repeats the behavior. They treat the long term as a series of short term quick and dirty solutions.

It comes down to a matter of professional ethics. Software developers are often disparaged for being sloppy, slow, and unreliable. Whole IT departments and product development teams carry the weight of appearing incompetent and moribund. Why? Because they succumbed to the seductive idea that quick and dirty solutions are faster in the short term.

True professionals don't work that way. True professionals keep their code clean at all times. True professionals write unit tests that cover close to 100% of their code. True professionals know their code works, and know it can be maintained. True professionals are quick and clean.


Comments

  1. YAChris about 1 hour later:

    Most developers respond to schedule pressure by cutting corners, rushing, or falling back on quick and dirty solutions.

    It’s much worse than that.

    It took me over twenty years as a professional, but I finally realized what was happening: I’m “graded”, for lack of a better term, on how much code I put out.

    Nope, never been in a shop where “lines of code” was used as a metric, but everywhere I’ve been I’ve been praised for putting out more code than nearly everyone else there.

    And, you know, that praise thing… we all just want more of that! So I got better at writing faster. Not better, necessarily, just faster.

    Adopting test-first methodologies has always been a struggle for me… I love all the good stuff about it, but my gut reaction is “get this done fast!”

    Luckily for me, I’m now in a team where quality is first and delivery time a very distant second. And I’m even bringing them around to the idea of test-first programming! Well, in my dreams I am, anyway. But they are, perhaps, noticing the tests I have put in… maybe… sigh.

    Okay, still a few more battles out there to be won.

  2. Cristian Pergel about 2 hours later:

    The observed long term slowdowns, even if they always occur, do not imply that each “messy” short term is slower. Developers who take the quick-and-dirty approach more likely think “THIS messy short term will be quicker,” which is consistent with the weaker assumption “some messy short terms are quicker and some slower” (or, more optimistically, “past messy short terms were slower, but future ones will be faster”). That assumption is not, in general, invalidated by the observed long term slowdowns.

  3. Scott about 4 hours later:

    For production code, code-for-hire, or any code I intend to keep or maintain I agree completely. Recently my coworkers and I have been debating whether or not this rule necessarily applies to “spike” coding – experiments in code for the express purpose of investigating a small concept or idea, or for learning how a third-party library works and reacts. The danger, of course, is that this spike code will leak out into the “real world”. What would your opinion on this type of coding be with regards to fully testing it?

  4. Uncle Bob about 4 hours later:

    Even for spikes I would not relax my disciplines. There was a time when I thought that spikes would be faster if done quick-and-dirty; but I found that Marick’s rule is right even in that case. “When it comes to software, it doesn’t pay to rush.”

  5. Michael Kirby about 5 hours later:

    I once wrote a simple program for the local little league. Just a little utility. I started doing it without TDD (after all, it’s just a little thing).

    Ughh. It was like swimming through molasses. I couldn’t keep track of what I was doing; I felt lost.

    I deleted everything and started over with TDD. Much better, simpler, and I finished it.

    Mike

  6. Anthony Bailey 1 day later:

    I agree that no project is small enough to be immune to the problems of doing things too quick and dirty, so treating small projects as special cases is an all-too-easy mistake to make.

    Even once aware of the false dichotomy though, I think deciding exactly how far to go in keeping all the code clean remains a difficult judgement problem. It’s usually worth going further than your intuition tells you, but some changes for the better can still cost more time than is appropriate.

    And with respect to spikes: I agree it’s still appropriate to code as carefully as usual. But I think it’s OK to use techniques and technologies that won’t scale, and the like: take advantage of the deliberately limited context of the spike; otherwise there’s no point not going straight to production code.

    (In fact, it could be that spike/non-spike is another false dichotomy… sometimes it seems there’s more of a complicated continuum of assumed contexts.)

  7. Jan Daniel Andersen 3 days later:

    The only scenario where I might accept not having clean and test-covered code would be in a spike that was to be deleted as soon as it was done. This way the spike would only serve to provide knowledge to the developer(s) coding it.

    But in my experience spikes are often used by the whole team at different times as reference material. And therefore it HAS to be clean (readable) and covered in tests (documentation).

    I totally agree that quick and dirty is really just dirty. It’s a product of bad planning and the inability to embrace change.

  9. Nathan Henkel 5 days later:

    I think Bob’s logical “proof” ignores too many details to convince anyone except the already convinced, and even for one of those (me) it doesn’t seem quite right. Of course, I’m not sure even a more detailed account would convince anyone, but at least it seems closer to right to me:

    Here is our central assumption:

    Quick and Dirty (Q&D) increases our speed in the short term, but applies a long-term penalty.

    So, suppose a Q&D solution takes half the time to develop as a proper, clean solution. Also suppose that a Q&D solution applies an overall 20% penalty on future work on the same project, to reflect the ugliness of working with the Q&D code, and also the spreading rot it will tend to create throughout the project. We have been asked to do what we think is a 120 point project in 10 weeks. Our normal weekly velocity is 10. For some reason, we tell our customer we can do it in 10 weeks.

    Week 1 – our normal velocity would be 10, but, knowing we have to go extra fast on this project, programmers Q&D it up to 20 and do 20 points of work. 100 to go.

    Week 2 – we are still behind, but we feel a bit out of control, so we slow it down to normal, and get 10 – 0.2(10) = 8 points done. 92 to go.

    Week 3 – not feeling too far behind, we go at normal velocity, and get another 8 points done. 84 to go.

    Week 4 – 8 more, 76 to go.

    Week 5. The end of this week is the halfway point. We’re 16 points away from the halfway point. The temptation is strong, as is the pressure from management. Programmers Q&D it up to 20 – 0.2(20) = 16 points. We’re on schedule! 60 points to go. Only now our penalty is, say 40%.

    Week 6. Back to steady progress, but now we only get 10 – 0.4(10) = 6 points done. 54 to go. Uh oh.

    Week 7. Maybe we’re disciplined and hold on a little longer. 6 more points. 48 to go.

    Week 8. Discipline wins again, 6 more points, 42 to go.

    Week 9. Panic wins the day. We Q&D it up to 12. 30 to go, but the penalty is 60%.

    Week 10. Of course we Q&D it. We know it’s hopeless, but we hope we can get an extension if we’re close. We Q&D it up to 20 – (0.6)20 = 8. 22 to go, and our penalty is 80%.

    At this point, if we Q&D it any more, our code becomes unworkable. Our normal velocity is now 2. It will take 11 MORE weeks to finish if we can maintain discipline. Perhaps the customer sees 82% completion and extends the deadline. They’ll regret it. Why, oh why didn’t we push for this extra time at the beginning?

    This is oversimplified as the penalty is probably on a sliding scale. Also, you can probably never really double your weekly velocity, but if you did, the penalties would be at least this severe. Also, Q&D isn’t really a “strategy” that you can apply as a team—individual programmers decide how they are going to do things based on their training, their level of discipline, and the pressures they feel from others.
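    The week-by-week arithmetic above can be checked with a short simulation. Every number in it is the commenter's assumption, not established fact: a 120-point project, a normal velocity of 10 points/week, Q&D doubling raw velocity, and each Q&D week adding 20 percentage points of permanent slowdown.

    ```python
    # Simulation of the hypothetical project described above.
    # All parameters are the commenter's assumptions, for illustration only.

    def simulate(qd_weeks):
        """Return points remaining after 10 weeks.

        qd_weeks is the set of week numbers (1-based) worked quick-and-dirty.
        """
        remaining = 120
        penalty_pct = 0                       # accumulated slowdown from messes
        for week in range(1, 11):
            raw = 20 if week in qd_weeks else 10
            remaining -= raw - raw * penalty_pct // 100
            if week in qd_weeks:
                penalty_pct += 20             # the mess taxes every later week
        return remaining

    print(simulate({1, 5, 9, 10}))  # the story above -> 22 points left
    print(simulate(set()))          # steady clean pace -> 20 points left
    ```

    Under these assumptions even the clean team misses the 10-week deadline (a 120-point project needs 12 weeks at velocity 10), but it faces 20 remaining points at a velocity of 10, while the Q&D team faces 22 points at a crippled velocity of 2.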

  10. Nathan Henkel 5 days later:

    Oh, and one other simplification I forgot. Q&D is, at best, a gamble as increasing velocity goes. It’s not only guaranteed to slow down later work, it’s very possible that in the middle of doing Q&D work, you’ll program yourself into a corner and get no more, or even less done than you would have “going well”. At that point, you’ll either have to start over (big immediate penalty) or wriggle your way to something that will work, incurring all the future penalties for your mess for no short-term benefit.

  11. Wilhelm Schwarz about 1 month later:

    Most software projects are done in a “hurry up” fashion and they fail miserably, because the code is so buggy that stabilizing the project is impossible.

    Using unit tests, you can avoid checking in buggy code. This way you can measure if “hurry up” or working overtime is helpful for the project.

    If you measure the effort and the project advance, you will realize that it is not faster to write sloppy code, not even for spikes or prototypes.

    Also you will realize that removing unnecessary code is a boost to productivity.
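    As a minimal sketch of the check-in gate Wilhelm describes: the function and its spec below are entirely invented for illustration, but the idea is that assertions like these run before every check-in, so buggy code never lands.

    ```python
    # Hypothetical example of unit tests as a check-in gate.
    # The function and its fee rules are invented for this sketch.

    def late_fee(days_overdue):
        """Fee in cents: 25 per overdue day, capped at 500."""
        return min(25 * days_overdue, 500)

    # Tests written alongside the code pin its behavior down,
    # including the edge cases a rushed change is most likely to break.
    assert late_fee(0) == 0         # nothing overdue, no fee
    assert late_fee(3) == 75
    assert late_fee(40) == 500      # the cap holds
    print("all tests pass")
    ```

    Run under any test runner (or as a plain script), a failure here blocks the commit, which is what makes the “hurry up” effort measurable rather than a matter of opinion.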
