You have to estimate an engineering task, in man-hours.
How in the world does anyone do this?
How will I know the numbers are even close?
Are all engineering bids just wild ass guesses (WAGs)?
Gee, maybe I don't have to do this. If I put it off, maybe they won't notice.
Not likely. Nobody buys something if they don't know what it costs, or is likely to cost. Someone will do an estimate. Or the job won't happen. In fact, one of the things that got me into management was considering who would do the estimate if I didn't!
Actually, there is a science to this. Project planning is a soft science, so the results will always have a sizable tolerance. But that tolerance can be controlled if you use the methods below properly. Things work out a lot better for everybody if you start with a good estimate.
The methods described in this article all have their advantages and disadvantages. This is to help you pick the right tool for the job.
This is about creating estimates, not bids. If you are not sure of the difference, look here.
PMBOK calls this Analogous Estimating. To do this, you need to have a system that tracks work by biddable effort. Fortunately, if your company uses a WBS structure, this is not too difficult.
The basic steps to do this:
Use your knowledge of the company to find similar tasks to the one you are anticipating.
Look up how much was spent on those tasks. A finance person should be able to get this for you, or maybe you can run the reports yourself. You would ask for the actuals on the XYZ project, listed by task, by engineer type, or as a name run of hours expended. For instance, if you know who did the PWB design, then you can use a name run by project to see how many hours were expended doing the various designs.
This method also implicitly follows how financial data is tracked by your system. This is how you should bid it. That way it is meaningful to those who will be reviewing, fact finding, or using it later. Bidding a job using the finance system work divisions shows respect for that department and will make you friends you will probably need.
This actual data will include more than the man-hours. It should also include the level of engineer used, the external requirements, and the type of program. You won't find something of exactly the same complexity, so you probably have to apply a factor to it. Some estimate sheets require an actual from a similar task to be listed, so it can be examined during an independent review (commonly known as Fact Finding).
The larger the number of man-hours in the estimate, the better this works, because individual variations average out. Bigger numbers will more accurately reflect your organization. For very small efforts, the skill of the individual and the exact conditions may render actuals pretty useless. It also works much better when you have many similar tasks worked in similar situations. This is very applicable for bug fixes in large applications, but if you have a combination of new methods, new people, and new product concepts, you really shouldn't base your estimate on actuals.
Note that this method includes all work charged to the effort, without regard to effectiveness. It includes errors and rework. It also may include inconsistent time reporting, so be sure to review it to make sure the data is usable.
Example: You locate actuals for designing and testing a similar FPGA to be ready for integration. It shows 250 hours total. Another for an FPGA twice the size took 600 hours. You are estimating an FPGA somewhere in between, so 400 hours should be reasonable. A factor would then be applied for the difficulty of the task, and the level of the engineer working it.
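The interpolation in the example above can be sketched in a few lines. The sizes, the data points, and the 1.1 difficulty factor are hypothetical; note that straight-line interpolation gives 425 hours, in line with the round 400 quoted in the text:

```python
def estimate_from_actuals(size, actuals):
    """Linearly interpolate man-hours from (size, hours) actuals.

    `actuals` is a list of (size, hours) pairs from past, similar tasks.
    Sizes are hypothetical complexity units (e.g. relative FPGA size).
    """
    pts = sorted(actuals)
    for (s0, h0), (s1, h1) in zip(pts, pts[1:]):
        if s0 <= size <= s1:
            # straight-line interpolation between the two nearest actuals
            return h0 + (h1 - h0) * (size - s0) / (s1 - s0)
    raise ValueError("size outside the range of recorded actuals")

# The FPGA example from the text: 250 hrs at size 1, 600 hrs at size 2.
base = estimate_from_actuals(1.5, [(1, 250), (2, 600)])   # 425 hrs
adjusted = base * 1.1   # then apply a factor for difficulty / engineer level
```

The factor at the end is where judgement about task difficulty and engineer level enters; the interpolation only captures what your organization has historically spent.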
This has many advantages - it automatically factors in all sorts of influences of your organization and personnel, it includes things like reviews, meetings, and travel that are easily left out, and it also includes the design quality of your organization - because any backtracking is included too.
As a rule, the final expenditure on the task you are bidding should end up being less than your factored actuals due to improvements in tools, methods and personnel. People know this, so expect your estimate to be whittled down some. There may be other environmental changes that you must factor in too.
PMBOK refers to this as Parametric Estimating. This is the science of studying a constrained set of tasks, to build an estimating model that can be applied based on quantitative factors you know prior to execution of the task. Metrics are a formalization of the actuals analysis process described above. The goal is to allow anyone to estimate jobs based on a rulebook. These are created by analyzing actuals, to see if they correlate to any of the incoming design parameters (size, weight, thickness of spec, etc). You create formulas and test them against empirical evidence. Then you use your rulebook to bid the job. For practical examples of correlations using metrics I developed that held up pretty well as I used them, you can look here: Using Estimating Metrics.
This is pretty hard to do when estimating engineering, because the industry and techniques change so much. And there are environmental effects to consider. Some metrics work well in one system, and are totally useless in others.
I have reviewed COSYSMO, but there are probably lots of other metric systems. I recommend reading up on the state of the art, then build your own model. The example in Using Estimating Metrics shows that what works in one design culture will not necessarily work in another. You can also find a bunch of good stuff at CSIAC developed by the Data and Analysis Center for Software. Look for correlations between the facts that were known at the start, and actual expenditures you can dig up.
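A minimal sketch of building such a rulebook, assuming a simple linear model and invented historical data. Spec page count is just one candidate parameter here - use whatever correlates in your own actuals:

```python
def fit_metric(params, hours):
    """Least-squares fit of hours = a + b * param from historical actuals.

    A toy parametric model: `params` could be spec page count, gate
    count, etc. (hypothetical inputs - the point is testing correlation
    against your own organization's data).
    """
    n = len(params)
    mx = sum(params) / n
    my = sum(hours) / n
    b = sum((x - mx) * (y - my) for x, y in zip(params, hours)) / \
        sum((x - mx) ** 2 for x in params)
    a = my - b * mx
    return a, b

# Hypothetical actuals: (spec pages, hours expended) from four finished tasks.
pages = [10, 20, 40, 80]
hrs = [120, 210, 400, 790]
a, b = fit_metric(pages, hrs)       # a = 20, b = 9.6 for this data
estimate = a + b * 30               # predict a new 30-page task: 308 hrs
```

Before trusting a model like this, test it against actuals it was not fit from; a metric that only explains its own training data is a WAG with extra steps.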
PMBOK calls this method Bottom Up Estimating. In this method, the effort is broken down into very discrete, easily understood incremental tasks. Typically each individual task would be no more than a man week. Read the specs, write the models, build an image, prepare the reviews, test functionally, rework, test at speed, etc. These are broken down into understandable work blocks, that you can reasonably apply a duration and manloading to. They are then summed to get a complete estimate, which gives you an exact number.
This is a bit of work, but most functional managers do this for the process anyway, and will pretty much have this on hand. This is the goal all of the engineers should be working towards - it is why you did the training, bought the software packages, added the server, etc. The final effort expended will probably be higher than what is calculated using this method. What a process based bid does not contain is the normal technical issues and blind alleys that are part of every design. As a result, this estimate process produces a result that is typically low, and must have a factor or tasks added as "risk". See the risk section below.
Bear in mind that this is not bid per requirement, except maybe in a very large software effort. Design is melding many requirements into one result. A power supply or memory system may have few or many requirements, but the tasks to develop the system are more dependent on the difficulty of combining (at times contradictory) requirements than the total number or difficulty of any specific requirement. In this method, you estimate tasks. Executing a series of tasks enables you to comply with a set of requirements. PMs may want to know exactly what compliance with each requirement will cost. This can be done, but is very time consuming to estimate. Group sets of requirements together to make the situation tractable.
Meetings, reviews, telecons, metrics, tool training and other administrative actions need to be included as tasks using this estimate method. All that stuff adds up, and is all in the wrong direction. The Task/Method method forces you to account for all of these by hand. It can be difficult, like naming all the states of the USA in 10 minutes. Try it. It is hard.
Microsoft Project is very good for this, and allows you to assess other aspects of completion while you do it, but spreadsheets work just about as well. In any event, the output must integrate with the financial system, so be sure that your method produces the right categories for the financial people to run their numbers. Or you will have to do it over.
This method works better as the tasks get smaller. About 40-80 hours per task is the largest size that works well with this method. Any task estimate above 200 hours using this method should be suspect, and checked against one of the other methods in this article.
This is what it is worth to the "industry" to do the work. It is very rough, but it is a pretty important indicator of what your estimates should be. If you are estimating 400 hours for a task, and a competitor offers to throw it in for free on an order of 10 competitively priced units, then one of you does not understand the problem.
I think this is what PMBOK calls Expert Judgement, but the standard appears a little shaky on this subject.
The sales people have a pretty good idea of what their customers will pay for something, at the top level. That is their job. They provide this number to the proposal manager.
The proposal manager then breaks it down into costs for each of the elements, using judgement as to what the percentages should be. There is some discretion there, but it is not as hard as it sounds. Some of the costs will be relatively fixed, simplifying the calculations. Once the budgets (bogies) are determined, they are provided to the functional managers to determine if the spec can be met given the funding available. The managers then respond with a plan to do it for that, if it is possible. So in this case, the estimate is given to the managers, for them to see if they can do the job for that amount.
Be sure to ask what the customer will pay. If you just ask what the sales people think the development cost should be, you will get some weird number. This is because the lower the development cost, the lower the amortized cost, and the wider the range of customers that will find the product attractive. But they know development is necessary, because that will create a hurdle that will disqualify competitors. So if you ask them what they want, they will go schizo on you.
I never met a sales guy that thought a product development should cost more than $50k, and I have never seen a product developed for $50k that customers found attractive for more than a few months. But it is a holy grail they all strive for.
You can also go out for competitive bids, to see what a subcontractor would charge. You can look at this as the market cost of the task. Don't forget you would have to manage them. The rule of thumb for Subcontractor Management Expenditure used by Grumman, Lockheed, and Boeing is:
if you are using a subcontractor for the first time, you will have to match their engineering effort hour for hour.
So if you are having a battery pack modified by a new contractor, and they are going to use 40 of their engineering hours to do it, you will have to expend 40 engineering hours managing them. Of course, the reason you are subcontracting this effort is that if you did it yourself, it would cost much more, since they have a design basis and you don't. Once both parties are familiar with each other, this can drop to 50% or even 25% of the subcontractor's engineering effort. But don't overlook this - it is a hard metric that has been verified many, many times. Part of this may be due to subcontractor control bias. It is there, don't forget to include it in your estimate.
Be sure to account for how your company loads external activities and contractors. This varies from company to company, and could include the subcontract management effort, causing you to double book this cost accidentally.
This may be researched as part of a make buy effort. Don't be shy about finding out what a contractor would charge for the effort, this is valuable information to put your estimate into perspective.
The most complete implementation of this is Evidence Based Scheduling. It is a very modern method for projecting completion dates and costs, using historical start and end dates of similar tasks and individuals to generate simulations and probabilities. Sort of like simulating the next football season's results.
When a task is assigned, the engineer is asked for an estimate, which is captured by the tool. When the task is completed, the data is logged into the tool. A track record for the engineer is built. Correlations can be calculated for accuracy, variability, and offset when compared to the actual effort expended to accomplish the task.
This model is used to predict the engineer's ability to estimate and execute what he says he will. So if an engineer is found to typically estimate one week short, +/-50%, the program will use this to build a model for that engineer.
Simulations of the project are then run to project the probability of outcome, given dependencies, resource availabilities, etc. So you get estimates, and probabilities of meeting those estimates, with distribution. Something like, "It has a 93% probability of being done by August 15".
Note that this method knows nothing about the work to be performed - only what the developer's original assessment is, and what the developer's track record has been.
I haven't used it, but it sure looks like it would work. You can find it implemented in a bug tracking tool called FogBugz.
For this to work, you need many, small, similar tasks (the lower limit is 6 for each developer, I think, for complex mathematical reasons), so it is mainly applicable to coding changes in larger efforts. And an environment change would affect it. But an interesting concept.
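A rough sketch of the idea, with invented engineers and history - this is my simplification of the concept, not FogBugz's actual algorithm. Each engineer's past actual/estimate ratios form an empirical error distribution, and the simulation draws from it:

```python
import random

def simulate_completion(estimates, history, trials=10_000):
    """Monte Carlo sketch of Evidence Based Scheduling.

    `history` maps each engineer to past (estimated, actual) hour pairs;
    the actual/estimate ratios are that engineer's track record. Each
    trial scales every open task's estimate by a randomly drawn ratio
    from its engineer's history, and the trial totals are collected.
    """
    ratios = {eng: [act / est for est, act in pairs]
              for eng, pairs in history.items()}
    totals = []
    for _ in range(trials):
        totals.append(sum(est * random.choice(ratios[eng])
                          for eng, est in estimates))
    totals.sort()
    return totals

# Hypothetical data: Ann runs ~20% over her estimates, Bob is near target.
history = {"ann": [(10, 12), (20, 24), (8, 10)],
           "bob": [(10, 10), (15, 14), (30, 31)]}
open_tasks = [("ann", 40), ("bob", 25)]
totals = simulate_completion(open_tasks, history)
p90 = totals[int(0.9 * len(totals))]   # 90% of trials finish under this
```

The output is not a single number but a distribution, which is what lets the tool make statements like "93% probability of being done by August 15".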
All estimates include some accounting for the unknown. PMBOK refers to this as Reserve Analysis. Sometimes this is just a rule of thumb. Sometimes it is called "fat", or "padding", or more neutrally, "contingency". But it had better be there.
A quantitative way to estimate for risk is to include estimates for the risks involved, with probabilities and impacts to others. Estimates then look like this, using a software task as an example:
So the base estimate is 100 hrs, the worst case is 210 hrs, and summing the probability-weighted impacts adds a reserve of 15 hours.
For the third risk above, there is an IMPACT to the rest of the design which the estimator has identified, but cannot estimate. This is a flag for others to include this risk in their estimate.
Sum all the hours, and all the probability-weighted impacts. Then you can generate a minimum value if everything goes as planned (a statistical impossibility), a weighted probability value, and a worst case value (which is also a statistical impossibility). The risk budgets are then quantitative.
You do not know how these risk hours will be allocated. Managers will be trying to work to their budgets - in the case above, the original 100 hours. All the risks are summed into a general budget, and applied in quantum lumps as the problems appear. For example, if you are doing 10 protocols, each with a 10% probability of not working out, then you had better have a budget to do one of them over.
This allows the reserve to be created across the project. The effect of something not panning out is defined and can be allocated for. So if something goes wrong, theoretically, the PM already has a budget to cover that. If you bid the risk properly.
By applying the expected manloading to the tasks, and their interaction by linking dependent tasks, schedule probabilities can also be calculated by running simulations. This is pretty involved, but is the basis of several modern scheduling methods.
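A toy version of such a schedule simulation, assuming a serial chain of three hypothetical tasks with (optimistic, likely, pessimistic) durations drawn from triangular distributions:

```python
import random

def schedule_probability(tasks, deadline, trials=20_000):
    """Crude schedule simulation for a serial chain of dependent tasks.

    Each task is (optimistic, likely, pessimistic) days. One triangular
    draw per task per trial; returns the fraction of trials whose total
    duration fits within `deadline` days. All numbers are hypothetical.
    """
    hits = 0
    for _ in range(trials):
        total = sum(random.triangular(opt, pess, likely)
                    for opt, likely, pess in tasks)
        hits += total <= deadline
    return hits / trials

chain = [(5, 8, 15), (10, 12, 20), (3, 4, 8)]   # three dependent tasks, days
p = schedule_probability(chain, deadline=30)
```

Real scheduling tools add parallel paths, resource constraints, and calendars, but the core of the probability statement is this same repeated sampling.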
PMBOK calls out a similar method, and there are probably many more ways of quantifying risk. The one above is scalable and easily understood, and also folds well into risk management plans.
You can read more about reserves in this article: The Paradox of the Reserve
When I was a manager of departments, I always estimated and managed to 10% of the effort of the people who reported to me. But I never really verified this to any extent. Globally it worked, and I believe it is common practice. This is for functional management.
Project Engineering/Program Management: When I started in engineering, the Project Engineer/Program Manager (the person who made sure the product met the spec) was simply bid as a fixed level of effort. She would be on the job full time from day one to delivery. On big jobs, this didn't always work.
I learned the hard way to estimate Project Management of a Parallel Development at 10%.
I was the Project Engineer of a large program. The customer was delighted with our plan and estimates, and kept adding lots of functionality and other goodies. What started as a good sized (10 engineer) job ended up as 20 equivalent engineers full time on the job, and the name run hit over 40 at its peak. Clearly, this was too much for one person (I know that now!).
If the Program Management task had been bid at 10% of the engineering effort, it would have been clear that help would be necessary at the very start, and it would have made things much easier.
If you are doing a serial development, you do not need to bid at 10%. Instead, an up-front estimate for setup, plus a level of effort, works fine for this.
This is estimated on a calendar basis, like renting a parking spot or staffing a bank teller window. You pay for it, whether it is used or not. This is easy to bid - the duration of the job times the percentage of the day someone spends on it. Avoid this if you can; it is not good for most work. Implicit in it is that the work is scheduled to fit the resource availability, rather than the other way around. But you will see it in heavily Resource Constrained environments. For instance, you may determine that supporting the fielded product will require X hours per week, indefinitely. Someone will be available. They may get backed up, they may have nothing to do. But you need someone to "answer the phone".
You can estimate a job this way - X is full time, Y is half time, etc. But this implies that the effort depends on the personnel available, and other factors (schedule, functionality) are the dependent variables. This is typical for standard products, developed by experts, in resource constrained environments.
All estimating results should be captured and organized so the result can be assessed and understood, compared with actuals and other estimates, and used to prepare for the turn on of the effort. The Basis of Estimate is a very valuable document, and I cannot overemphasize how important it is to keep track of exactly what the final estimate was based on: the version of the requirements, the date, interlocking assumptions, and all the factors that may be twisted at the end of the estimating process to develop a final number. These estimates are valuable work, and you should set aside some time to develop a method of standardizing them to integrate with the work being performed, and with the financial system they will feed. If you have a good Basis of Estimate system, then you can quickly evaluate whether a previous estimate is still appropriate, or useful for estimating a new task.
For complex reasons I won't go into here, the customer will sometimes fact find the estimates. This requires your company to show justification for each task bid. If you have a good Basis of Estimate, this will be no trouble whatsoever.
Comparison to Actuals is the preferred method for justifying pricing during fact finding. Task/Method is OK for a percentage of the tasks, but will give the auditors a headache if it is used too much. Your estimates should be Current, Accurate, and Complete. When you support fact finding, be ready to address how each estimate meets these criteria.
Usually this is run by the financial people, and the estimators are brought in for questions. After fact finding, the Program Managers will go to negotiation.
The fact finding process is not kind to a general risk budget, never mind that this is a common practice. So if you will be fact found, you will have to include the risk in each of the individual estimates. This is also necessary because some of the work may be eliminated during negotiations, and the risk budget should go with it. Risk funding should be placed back in a general risk budget when the contract is awarded, either at the top level, or with the Cost Account Managers.
This is also known as Cost-Reimbursable. This method is used to coax engineering firms into accepting a task that has so much risk it cannot be accurately estimated (at least that is the positive view). Actuals are billed to the customer, and a fixed fee is paid based on successful completion of defined milestones. So a $100k job with a $20k fee could balloon into a $900k job with a $20k fee. Or it could be completed or stopped after $10k, and you get your $20k fee. Sometimes there are incentives. Cost Plus contracts also may specify exact individuals, percentages of time spent on the job, and other conditions. There is formal reporting of expenditures, with triggers for reports and reviews. An example: at 75% completion, you are required to develop and deliver an estimate at completion, given the currently requested tasks.
Still, this contract is at least nominally based on an estimate of the expected effort. You need this to determine manloading, and negotiate the fee. Level of effort is usually how these are estimated. Technical progress then becomes the dependent variable.
I always hated cost plus contracts - it was difficult to have an intelligent conversation over whether or not to do something (won't they pay us anyway?). So the plan was very fluid, and discipline was difficult. There was always a day of reckoning, and it was never pretty. But it is the only way to handle jobs with customers who do not have a spec, and want to direct the daily activities of the engineering staff. And some companies love them.
By the way, any work that is performed by your organization is inherently cost plus. This is why some Program Managers prefer to go outside for everything, where it is "fixed price". If you don't change the statement of work. Ever.
Prototyping to produce a man-hour estimate probably sounds a little strange to an engineer, but operations people do it all the time. They measure the time required to perform each step of a manufacturing operation, examining the efficiency of each step as they go. They then apply a learning curve to the estimate, to take into account that the operator becomes more facile and the time required for each step drops to some minimum value.
If you are doing many, similar engineering tasks, like designing forms, reports, or operator interface screens, or verifying compliance to many similar standards, it may be worthwhile to prototype an effort to determine an estimate. Set up a tool to carefully catalog the time required for each step. Then use this data to project the overall scope of the effort, including classical factors for learning.
Of course, there is always the hand-waving estimate. You can produce one by performing the following procedure:
Place right hand 12 inches in front of left shoulder.
Move right hand to 12 inches in front of right shoulder.
Move left hand to 12 inches in front of your left shoulder.
Move left hand to 12 inches in front of your right shoulder.
Repeat the above until everyone agrees with your estimate.
This may be improved by using a laser pointer, or a white board and marker. Don't expect estimates of this type to do well during Fact Finding.
The best estimates compare several methods to come up with a reasonable number. You should use at least two methods - one for calculation, one for verification.
Actuals should be used for familiar tasks, and Task/Method should be used for anything new. For small things, use Task/Method. For large efforts, use Actuals. Develop metrics, and improve them. Most experienced estimators understand that these methods are not exact, and combine them with experience and other expectations of the future to produce the final estimate. Level of Effort and Expert Judgement should only be used like roofing tar on a shingled roof - a little bit, here and there. And metrics need to be based on your organization's actual performance, not on external measurements - unless you emulate exactly the conditions that created those metrics.
Also, these are estimates, with risks - not Bids. What's the difference? Look here.