This page contains examples of correlations I worked up: some held up pretty well as I used them, and some didn't.
Requirements Management Effort
At Smiths, I managed a group that did traditional system requirements management. To develop a metric for bidding, I plotted the hours expended by our system engineers (typically one or two thousand) against the number of requirements on a piece of equipment (typically in the hundreds). This included:
the time to find the requirements (all in one requirements document? HA! Unlikely!),
put them in a matrix,
allocate them to owners,
determine the test method for each, and
get it all negotiated and complete.
I noticed there were two pretty linear plots, one for each of two types of customers - one type that was trying to buy what we already had, and one type that was deeply involved in what the final product would be. So the requirements effort could be reliably bid from the expected (or actual) number of requirements and the type of customer.
Note that this uses two parameters known before the work started - the rough number of requirements, and whether the customer would be heavily involved technically or not. If you have done this, you know that customers usually lean hard one way or the other.
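As a sketch of what that looks like in code - with invented offsets and slopes, since the real ones are proprietary - the whole model is just a line per customer type:

```python
# A sketch of the two-parameter bid model described above. The offsets
# and slopes are invented for illustration; the real values came from
# plots of historical hours vs. requirement counts.

COEFFICIENTS = {
    # customer type: (startup offset in hours, hours per requirement)
    "buys_what_we_have": (200.0, 2.0),   # hypothetical numbers
    "heavily_involved":  (400.0, 4.0),   # hypothetical numbers
}

def estimate_req_mgmt_hours(num_requirements: int, customer_type: str) -> float:
    """Estimate requirements-management hours from two things known
    before the work starts: requirement count and customer type."""
    offset, slope = COEFFICIENTS[customer_type]
    return offset + slope * num_requirements

print(estimate_req_mgmt_hours(350, "heavily_involved"))  # -> 1800.0
```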
This did not include the design effort, of course. A single requirement could drive hundreds of engineering hours, while another required almost none. But the design effort was estimated by the design groups. I was bidding requirements management, and the metric worked out pretty well for that.
Printed Wiring Board Design
At Eurotech, I developed a metric for PWB layout based on board area. In analyzing our performance, I found that the layout time was directly dependent on the number of pads (duh). But estimating the number of pads before a design starts is not easy, so I couldn't use that. However, because we kept the boards as small as possible, our pad density per square inch was almost constant across all our layouts over several years. I discovered that the man-hours required to lay out a PWB tracked directly with the board area. I suspect this is the result of many forces (component types, test point philosophy, connectors favored, CM capabilities, etc.). Also, if there was room for more circuitry, something was added to increase value. So if the space allocated for the PWB was 4x6 inches, the man-hours could be estimated fairly accurately, including rework, communication, etc., without knowing much else about the design. Of course, there was a factor for complexity, driven by things like the number of fine-pitch BGAs and the component manufacturers' design rules.
For both of these metrics, there was an offset corresponding to startup effort, and a fairly linear slope after that.
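Here is a rough sketch, with invented history, of how the offset and slope might be fit and then applied to a new board. The complexity multiplier is a judgment call, as noted above:

```python
# A sketch of fitting the offset and slope from history and applying the
# result. All numbers are invented for illustration; the real history
# (and the complexity factor) would be your own.
import numpy as np

# Historical (board area in square inches, layout man-hours) pairs - hypothetical.
history = np.array([
    (12.0, 130.0),
    (24.0, 250.0),
    (36.0, 370.0),
    (48.0, 480.0),
])

# Ordinary least squares: hours = offset + slope * area.
slope, offset = np.polyfit(history[:, 0], history[:, 1], deg=1)

def estimate_pwb_hours(width_in: float, height_in: float,
                       complexity: float = 1.0) -> float:
    """Estimate layout man-hours from the board envelope alone.
    complexity > 1.0 covers fine-pitch BGAs, tight design rules, etc."""
    return (offset + slope * width_in * height_in) * complexity

print(round(estimate_pwb_hours(4.0, 6.0)))       # 4x6-inch board -> 249
print(round(estimate_pwb_hours(4.0, 6.0, 1.3)))  # denser parts -> 324
```

The point of the 4x6 example: the envelope plus a complexity judgment is enough to bid, before the schematic exists.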
I can't show the actual numbers - these, like your own metrics, should not be let outside your organization except under a non-disclosure agreement. But since all organizations are different, they wouldn't do you much good anyway.
But, Don, Don! Doesn't the PWB estimate method above assume the final board will be 4x6 inches? How do you know that will be true?
There were many factors that drove the PWB density to be constant at Eurotech. Solutions were implemented to fit as much value as possible into the surface area, rather than letting the area expand or shrinking the board. Once the maximum value fit into the allocated envelope, the design moved on. The real estate would be filled with circuits, either by us or by our customer, to maximize the product value. So in the special case of the PWB, it worked.
I agree it would not work if the outline dimension were left to the mechanical designer to make as large as desired - for instance, to enable single-sided component placement, or to reduce the number of layers and thereby the recurring cost. Cost reduction of this sort increases engineering effort, and is a discrete engineering task. We did this quite a bit at Orbital, where the additional engineering effort was bid as a design-to-cost activity. This bidding metric would not have worked at Orbital. But it did work at Eurotech, where the most value in a fixed footprint was always what we were after.
So this is a good example of how a parametric estimate that works under one set of conventions will not work at all under another.
Software Estimation
WARNING: BASHING TO FOLLOW.....
When I began to estimate software programs, most software managers used COCOMO. I studied it, but never understood how it could work. How can you know how many lines of code there will be before you start?
The COCOMO estimates were generated from man-hours expended per completed line of code. That is easy to calculate at the end, but at the start you do not know the number of lines of code you will eventually end up with. And a good estimating metric uses something you know before you start, not after you are finished.
I got a better answer by using the number of pages in the Software Requirements Spec. That tracked pretty well with what we ended up expending to code, test, and integrate - much better than COCOMO did.
An estimation method should be seeded with facts that are available before the design starts - # of requirements, # of functions, # of functional integrations, memory size, processor speed, operating system, etc. - not with numbers you only know at the end. The fact that code size is difficult to estimate is reflected in the typical requirement to leave 50% of the system memory as spare, which is, in effect, a statement of the predicted accuracy of the lines-of-code estimate: it could be off by a factor of two! Not useful in real life.
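To make the contrast concrete, here is a minimal sketch of an estimate seeded with a pre-design fact - the SRS page count - using placeholder coefficients:

```python
# A sketch of seeding an estimate with a pre-design fact (SRS page count)
# rather than a post-hoc one (delivered lines of code). Both coefficients
# are placeholders; the real ones come from your own completed programs.

STARTUP_HOURS = 300.0       # hypothetical offset
HOURS_PER_SRS_PAGE = 40.0   # hypothetical slope

def estimate_sw_hours(srs_pages: int) -> float:
    """Code, test, and integration hours from the SRS page count,
    which is known when the bid is made."""
    return STARTUP_HOURS + HOURS_PER_SRS_PAGE * srs_pages

# A COCOMO-style seed (final lines of code) is only known at the end;
# a factor-of-two error in the guess propagates straight into the bid.
print(estimate_sw_hours(120))  # -> 5100.0
```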
Software estimation has moved beyond COCOMO by now. You can find a bunch of good stuff at the Data and Analysis Center for Software.
I am not just being mean by including this here. I put it here as an example of incorrectly using assumed outputs, rather than known inputs, to generate metrics-based estimates.
On the other hand, I did find that hardware/software integration effort tracks directly with the lines of new code at the start of integration. In this case, fewer lines of new code probably mean fewer errors, less interaction, better partitioning, a better developer - all the good stuff. And by the time you get to integration, you do know the lines of code! So the metric works there.
Use the right tool for the job.