Definition: This parameter covers everything the customer values that is not specifically called out in the formal specifications. For a standard product, the formal specification may be quite brief, and a customer may be won over by the frames per second, the small memory footprint, or some other parameter the product delivers under their specific operating conditions. These are generally hard to specify exactly, and they are not the same for every customer. Most customers will very much appreciate good performance, because it takes risk out of their requirement decisions.
Requirements and Performance are frequently confused. Look here for a writeup on the difference.
As a designer, you can improve technical performance in many areas -
An unspecified or difficult to define parameter may be very desirable, like reliability, configurability, serviceability, or quality of support.
Required areas of performance may have wide buffers (heavier-than-required material, good access to connectors, etc.), taking risk out of possible changes and unexpected situations.
The performance may be maintained over a wide range of conditions, not just the test suite.
Spare features may allow reuse in additional applications.
High quality formal test reports, allowing customer testing to be eliminated or reduced.
Durability, and resistance to damage by accident, during training, or from abuse.
A better user interface, easing acceptance.
Some Examples:
My old Droid X was very nice, but the Bluetooth had poor performance. It did not work very well when the WiFi was on, and the audio volume was very low at times. I am sure it met the specification, but it was not very usable in certain situations. So I could never count on it working well, and as a result I didn't use it much.
We built a box for an aircraft that slid into a rack. It had a handle, and the requirement was that the handle was only for removing the unit from the aircraft, not for carrying the box. Unfortunately, it was placed such that it was convenient for carrying the unit to and from the aircraft. But it was not designed for that, and we got a lot of units back with broken handles. Change the requirements? The box was too heavy to be carried by one hand per MIL standards. Operators were supposed to use a cart, but they seldom did. We ended up redesigning the handle to withstand the stress of carrying the unit and flinging it up onto a bench, as we had seen operators do, and we tested to that stress. Use of a handle is easy to specify and test, but performance matters because you never know how a handle will be used in real life. Respecting handle requirements is not something you can depend on.
Current, which designed Broadband over Power Lines equipment, discovered that the market really wanted remote meter reading over power lines. Although the two applications were technically similar, performance improvements to BPL (say, doubling the data rate for 50% of the users) were no longer valued. What had to be implemented was a basic, hard set of requirements. Data transmission performance was not as critical as 100% compliance for all meters under very specific conditions. A different world.
Sunlight readability. Don't get me started. Performance varies wildly, and is very complex to specify.
Designing for performance
In order to design for performance, you need a good understanding of the actual usage of the product, and the freedom to pounce on high-value implementations. The circuit and component designs are usually based on flyoffs between differing approaches, and some formal trades will probably be performed. Weighting should be applied to the parameters that have value, so they are addressed. Sales and Marketing may even use formal methods for determining the value of performance improvements, and they can provide inputs on the worth of a feature in terms of additional sales or profit margin.
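For the formal trades, this kind of weighted flyoff reduces to a very small calculation. Here is a minimal sketch in Python; the criteria, weights, and scores are invented for illustration and would really come from your trade studies and Sales/Marketing inputs.

```python
# Minimal sketch of a weighted trade study between two candidate approaches.
# Criteria, weights, and scores are made up for illustration; in practice
# they come from your trades, test data, and Sales/Marketing valuations.

criteria = {
    # criterion: (weight, score for approach A, score for approach B)
    "data rate":       (0.30, 7, 9),
    "power draw":      (0.20, 8, 5),
    "recurring cost":  (0.25, 9, 6),
    "serviceability":  (0.15, 6, 8),
    "reuse potential": (0.10, 5, 9),
}

def weighted_score(which):
    """Sum of weight * score for one approach (index 1 = A, 2 = B)."""
    return sum(row[0] * row[which] for row in criteria.values())

score_a = weighted_score(1)
score_b = weighted_score(2)
print(f"Approach A: {score_a:.2f}, Approach B: {score_b:.2f}")
print("Winner:", "A" if score_a > score_b else "B")
```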
In custom designs, reuse was always on the Sales people's minds. To them, the addition of a peripheral could mean that we would have an off-the-shelf product that exactly met the customer's needs. So that third Ethernet port was very important, knocking hundreds of thousands of dollars out of the next design (theoretically, but hey, that is where you start).
Technical Performance is the direct enemy of Recurring Cost during a design-to-cost effort. In a DTC effort, you are cutting out everything you can to bring the unit under the cost target. Adding features and making the product more robust does not help that. It will also hurt your engineering budget, as there is simply more to do: if you throw in a feature, you have to test it, whether the current customer uses it or not. So when performance is critical, expect recurring cost to be a constant concern. In a value-based effort, what matters is the ratio of Technical Performance to Recurring Cost; a better ratio means more kinds of customers will buy the product.
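As a rough illustration of the value-based view, here is a small Python sketch comparing two hypothetical build options by their performance-to-recurring-cost ratio. The performance scores and costs are made up; the point is only that the ratio, not raw performance, is the figure of merit.

```python
# Hypothetical comparison of two build options in a value-based effort.
# "performance" is an abstract score from a trade study; costs are
# per-unit recurring dollars. All numbers are invented.

options = {
    "baseline":             {"performance": 100, "recurring_cost": 800},
    "baseline + extra I/O": {"performance": 115, "recurring_cost": 980},
}

for name, o in options.items():
    ratio = o["performance"] / o["recurring_cost"]
    print(f"{name:22s} performance/cost = {ratio:.3f}")

# The added feature only pays off if the performance gain outruns the cost
# growth (and the engineering budget needed to design and test it).
```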
HALT and HASS are formal methods for quantifying certain aspects of a product's performance. HALT takes a product far past its design limits to see when and how it breaks, so corrective action can be considered to improve the design; HASS applies a reduced version of those stresses to screen production units. The goal of these tests is to discover changes that improve the reliability, durability, and life of a product.
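As a rough sketch of the HALT side, the loop below steps a hypothetical thermal stress past the spec limit until a placeholder functional test fails, then reports the margin. The temperatures, step size, and the unit_survives() hook are all invented for illustration.

```python
# HALT-style thermal step stress (sketch): keep stepping the chamber
# temperature past the design limit until the unit fails, then record
# the margin above the spec.

DESIGN_LIMIT_C = 60    # formal upper operating limit from the spec
STEP_C = 10            # step size per dwell
MAX_CHAMBER_C = 150    # do not exceed the chamber's own limits

def unit_survives(temp_c):
    # Placeholder: replace with a real functional test of the unit at temp_c.
    return temp_c < 110

last_pass = None
temp = DESIGN_LIMIT_C
while temp <= MAX_CHAMBER_C and unit_survives(temp):
    print(f"Passed functional test at {temp} C")
    last_pass = temp
    temp += STEP_C

if last_pass is not None:
    print(f"Highest passing step: {last_pass} C "
          f"({last_pass - DESIGN_LIMIT_C} C above the spec limit)")
else:
    print("Unit failed at the spec limit itself")
```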
Technical Performance does not affect Specification or Process very much, unless the customer discovers an aspect of performance they would like to ensure. Then it is folded into your Specification or Process (we discovered your product works way past 60C!! Can you guarantee this?). It may affect the Schedule, in that a brass ring may turn up that the customer wants to formalize, and you have to stop to run tests or analysis to make sure you can reliably provide the performance they are looking for. This is very hazardous if you are trying to be rigorous.
Resource Constraints? Most people are not good performance designers. Engineers know what they like, but often have trouble understanding how the product is used or the long-term effects of their decisions. There may only be a few people in your organization who can make these types of decisions successfully. Look for a track record: everyone thinks they know how to really soup up a product, but few do. A success is good; a failure followed by a success is even better.
Process - here is the sticky point. If a customer likes the performance, is it guaranteed? Do you test every unit for it? How do you decide where to set the limit? A commercial unit may have only five requirements, but the customers love it because it has some performance aspects that they really have to have. How do you figure this out? They may not even know. Do you start testing for it? Do you test features that a customer doesn't use? What if we throw in expandability, like an extra card slot? Is that verified, and how?
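One common way to answer the "where do you set the limit" question is to guard-band off the distribution you actually ship, rather than the formal spec. A minimal sketch, assuming a handful of hypothetical measurements and an arbitrary 3-sigma band:

```python
# Propose an internal limit on an unspecified performance aspect the
# customers rely on, based on the distribution of shipped units.
# The measurements and the 3-sigma choice are illustrative, not policy.

import statistics

# Hypothetical measurements of the valued parameter across recent units (watts)
measurements = [4.8, 5.1, 4.9, 5.3, 5.0, 5.2, 4.7, 5.1, 5.0, 4.9]

mean = statistics.mean(measurements)
sigma = statistics.stdev(measurements)

internal_limit = mean + 3 * sigma   # flag anything worse than typical + 3 sigma
print(f"Typical: {mean:.2f} W, sigma: {sigma:.3f}, "
      f"proposed internal limit: {internal_limit:.2f} W")

# Units still inside the formal spec but outside this limit get reviewed
# before shipping, rather than failed outright.
```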
I have produced designs that performed so much better than the spec that huge manufacturing errors went undiscovered. A unit with a missing signal in a twisted pair still worked well enough to pass testing and serve most applications for years.
Works the other way, too. Let's say our spec for power consumption is 20W max, and we run 5W typically, which our customer loves. It keeps his power bills down. A batch tests at 19W. Ship it?
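That 20W / 5W / 19W dilemma can be made concrete as a simple disposition check: the batch passes the formal spec but is far from what the customer has learned to expect. The review band below is a made-up policy, not a recommendation.

```python
# Disposition check for the power consumption example above: pass the
# formal spec, but hold anything far from the typical value for review.

SPEC_MAX_W = 20.0     # formal specification limit
TYPICAL_W = 5.0       # what we normally ship and what the customer values
REVIEW_BAND_W = 2.0   # how far from typical we tolerate before asking questions

def disposition(measured_w):
    if measured_w > SPEC_MAX_W:
        return "reject: out of spec"
    if measured_w > TYPICAL_W + REVIEW_BAND_W:
        return "hold for review: in spec, but far from typical"
    return "ship"

print(disposition(19.0))   # -> hold for review: in spec, but far from typical
print(disposition(5.4))    # -> ship
```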