This is a general page on Methods of Requirements Management. Below are the most common methods, which are used in various combinations to design electronic equipment.
This is not an exhaustive paper on the state of the art, but is only meant to give you a reference into the field if you are not familiar with the concepts and terms.
You cannot develop useful requirements without knowing what they will be used for.
There are three main reasons to develop formal requirements. One or all of these may be important to your project, and as such, you need to keep them in mind as you develop your requirements. For more reasons, here is a more complete list.
1) Generation of Requirements is a Good General Work Habit
The act of defining requirements can uncover conflicts and/or simplify difficult aspects of a design. This activity allows the project team to identify gaps and focuses discussions between stakeholders and the customer. Requirements can be used to allocate responsibilities and distribute tasks and efforts. And working out your requirements first makes things go a lot better, and can help you avoid disaster.
In this case, you are making the purpose of the effort clear to yourself, your customers and the stakeholders.
2) Requirements Provide a Formal Method of Defining and Verifying the Performance to be Achieved or the Work to Be Done
Usually this is done using the Shall/Will/Should convention.
Shall is something the product must accomplish. The gadget shall weigh less than 5.6 lbs.
Will is something that the customer or other entity will provide or accomplish. Exxon will provide as-built drawings of the structure in *.dxf format.
Should is a goal, to be accomplished if practical. You get brownie points, or may even get a few bucks, for it. The first article for software integration should be delivered in the final housing. This is not a requirement, but it is preferred. More on this below.
Verification of all the “Shalls” provides exit criteria and a method for grading performance. Each "shall" drives a specific test with go/no-go criteria, which gives you quantitative closure criteria. In this way, exact contract performance can be verified and recorded.
This is a method to turn expectations into formal contract statements that can be evaluated for acceptable completion.
3) Requirement Definition is the basis of a Method to Improve Schedule Performance
Mutually accepted requirements enable multiple efforts to work in parallel, which can drastically reduce development time. In fact, the more designers involved, the more important requirements become. For a complete write-up and examples of this, look here: Using Requirements to Drastically Improve Schedule Performance. These requirements support multiple parallel development paths by obtaining agreement (or at least acknowledgement) from all parties participating in the design. Running efforts with many designers in parallel without formal requirements is a hazard.
In this case, you are making a contract with the designers as to how each of the system segments will fit together at integration to perform the final functions.
HAZARD ALERT: During creation/agreement to requirements, there will be pressure from two sides:
One side will want the requirements to be as loose and general as they can get them. You will be lobbied for as much "wiggle room" as possible. The hazard is that a specification that is too general is useless. If the requirements don't control anything, they are not serving any of the purposes above.
The other side will try to lock in everything, making all decisions totally permanent, so they can order boxes of screws tomorrow. The hazard here is that you may disqualify better solutions by locking in detail that is not necessary to meet the requirements. At each level of design, control should only go so far down. You must allow the next level of design to make their contribution.
Not familiar with design levels? You can brush up here on the concepts needed to determine the right level of detail in your requirements document: System vs. Design vs. Implementation.
The key here is balance. Find the right level for the requirements for the system level of the task. Expect trial and error; you may want to have a frank talk with your ego before you start.
It is right to limit the NUMBER of requirements - but make each one count. Remove trivial requirements (e.g., "Unit shall be a computer based electronic assembly") whenever possible. A useful requirement has backup.
Depending on the type of project, requirements come in from three directions. It is a rare project that does not include some requirements from each of these sources.
1) Your customer, usually via a formal contract.
These come in by one or more of the methods discussed below. These have varying degrees of fidelity, and may continue development after you have started work. They come in from one channel (usually). The vehicle may be a formal specification, negotiations, acceptance of a proposal, etc.
2) Sales, Marketing or Program Office by direction
These may be formal, verbal or meeting minutes, or a hand wave (that looks like s**t! Fix it!). Or they may be value-based requirements (e.g., Key Performance Indicators), where you are to pack in as much value as you can (speed, weight, clarity, etc.) within a fixed time or budget. Or they may be by direction, e.g., we invested $2M in that processor core, so that is the one we are using. These can come in from many orthogonal directions, and you may have to bring together people who would rather be unaware of each other's existence to resolve requirement conflicts.
3) Engineering or Operations
Engineering and Operations will probably be the source of a few requirements. The customer or the Program Office may not be too concerned about these, but once you define these requirements, you must stick to them. Typically these requirements cover support and update methods, interface standards, construction methods, test access, etc. They come in from multiple sources, and responsibility for them may be formally allocated to different engineering groups by your organizational structure. They generally make sure that the product produced is compatible with your organizational capabilities or capital structure.
Surveys performed by engineering or operations may be the basis of some of these requirements. For example: units will be installed five feet below ground level in Florida, so they need to be completely submersible.
Of course, there will always be squabbling between these stakeholders. A product designed on speculation to an internal budget will get most of its requirements (and squabbling) from 2) and 3), while a contract from a customer for a specific item will primarily have 1) and 3) providing the inputs. If you have a sophisticated customer (like Boeing or IBM), the requirements generated by 3) may be very few, as every detail has been carefully laid out for you. In this case, system requirements management is more of a bookkeeping and verification task than a creative exercise.
There are several schools of system engineering, and each has their own convention for publishing and management of requirements. I discuss them here: Schools of System Engineering. In this article I will use the Shall/Will/May school, but the info below is generally applicable to all schools.
Below are some of the ways that requirements are defined in complex programs. There are more; these are the ones I have put into practice.
Requirements apply to either:
the performance of a product, in which case they are contained in a Technical Specification
or
the work or service to be performed, in which case they are contained in a Statement of Work.
While technical specifications state what the finished Product must do, a Statement of Work (SoW) states what you, your company, or your contractor must do. Sometimes these functions are combined into one document, but in general it is easier to work with them if they are separate. For example:
"the product shall contain two Ethernet Ports" would be in the Technical Specification Document,
"your company shall test each Ethernet port prior to shipment" would be in the Statement of Work.
The SoW usually contains:
quantities and delivery dates of products and documents,
reviews to be performed with the customer and others,
analyses and other work products to be delivered
travel,
warranty,
support,
how tests will be run, etc.
Quality provisions and activities are work items, and should be defined in the SoW. Prototype deliveries and their states of maturity should also be included in the SoW. It is also a good place for the customer to specify what testing they will be performing that you must support (for example, engineering support for a field trial, two weeks, Nome, Alaska).
Production units are usually covered by the Purchase Order, which may reference the SoW, although most ERP systems are set up for component part numbers. Sometimes prototypes and work products are specified directly on the purchase order. This does not work very well. It is better if these are specified in the SoW, where there is more flexibility to describe them completely.
A statement of work is always a text document.
The technical spec may be a text document, or a technical drawing package, in paper or as computer files or databases. I managed one program where the technical spec was a test procedure the finished unit had to pass. But the SoW is always text, at least the ones I have seen.
Classically, the technical specs should not contain work statements, and the SoW should not contain technical requirements. But usually there are a few of each that get into the opposite document. You can handle this with your compliance matrix. And of course, some of these are developed in process, as the product and tasks are better understood.
The requirements unique to a design are usually contained in a formal requirements document. In these documents, Shall, Will, May, and Should have very specific meanings.
Shall: It is the designer's responsibility to make the design meet this requirement, and to formally verify it as part of product acceptance.
Will: What the customer takes responsibility for, or external events that will happen. Your project may depend on it, but you do not make it happen.
Example: You Shall test the product. The customer Will provide the test equipment to be used.
May: The requirement is optional, or the operation is permitted but not mandatory.
Should: A goal, to be achieved if possible. Typically backed with financial incentives.
Although this is used in most Defense Department work, there are other definitions. Here is an RFC that has a slightly different definition: RFC 2119, Key words for use in RFCs to Indicate Requirement Levels.
Although these requirement collections are often published as linear, text-based documents, they are usually generated and maintained as a database where each requirement is numbered and tracked. Each record contains a single requirement, whether the final product will comply, who is responsible, the method of verification, and probably links to other requirements in higher or lower level documents. The linear Word document you receive is probably a report from this type of database. This is usually implemented with commercial tools such as Requisite Pro, DOORS, or others. Home-grown tools are also used, and the web provides even more options. It is common for the receiver of the text requirements document to blow it back into a database, and all of these tools have a semi-automatic process to create a requirements matrix using the definitions above.
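To make this concrete, here is a minimal sketch (in Python) of the kind of record such a database holds, together with the crude keyword scan a home-grown tool might use to pull candidate requirements out of a linear text document. The Requirement class and its field names are illustrative assumptions, not the schema of DOORS, Requisite Pro, or any other commercial tool.

import re
from dataclasses import dataclass, field

@dataclass
class Requirement:
    req_id: str                # e.g. "SYS-0042" (hypothetical numbering scheme)
    text: str                  # the requirement statement itself
    keyword: str               # "shall", "will", "should", or "may"
    owner: str = ""            # who is responsible for compliance
    verification: str = ""     # Test, Demonstration, Inspection, Similarity, Analysis
    complies: bool = False     # filled in as the design matures
    parents: list = field(default_factory=list)   # links to higher-level requirements

KEYWORDS = ("shall", "will", "should", "may")

def scan_document(lines):
    # Crude first pass: flag every line containing a requirement keyword.
    found = []
    for n, line in enumerate(lines, start=1):
        for kw in KEYWORDS:
            if re.search(rf"\b{kw}\b", line, re.IGNORECASE):
                found.append(Requirement(req_id=f"SYS-{n:04d}", text=line.strip(), keyword=kw))
                break
    return found

A real tool does much more than this (duplicate detection, links, change history), but a scan like this gives the flavor of the semi-automatic import step.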
This method works well if requirements are spread out over several documents. We always found some technical requirements in our Statement of Work documents, some contract performance requirements in our technical specs, and a few more lying around elsewhere.
When you buy DOORS, or an Access database collection like recdb, the linking can become quite sophisticated and powerful. These tools have links to email, tasks, project management software, configuration management software, the customer's requirements management system, etc. People buy these tools for a reason. As jobs get bigger, the need for these tools gets greater. Some general project management software packages come with simple requirements management built in.
This matrix can also be expanded into a Verification Cross Reference Matrix (VCRM), or a requirements dictionary, which includes information and references to capture the intent of each requirement (e.g., the unit will typically be handled and carried by the power cable during installation). Linking is an important part of the matrix.
This method has its limitations, but in general, is a pretty good way of defining and executing most of the design requirements.
Sections of specifications are often authored by different departments (Software, Applications, Availability, Quality, Logistics, Thermal Management, etc), and assembled into each component spec for printing and publication. So your technical or contract interface may be able to accept changes in some parts of the requirements documents, and have to bring in other departments to address comments or changes to other sections. This is typical of Quality, Test, Logistics, and other support functions.
Large projects have tiers of specifications, for hardware, software, envelopes, operator interface, etc. Interface Control Documents (ICDs) define the interfaces between design elements. Technical requirements are broken out by higher level designers, sometimes exactly as written in the top level spec, to allocate the requirement to subsystems.
These are usually referenced in the technical requirements documents. They may or may not follow the Shall/Will/May format described above, or may be released by the industry as guidelines but required of your equipment. Make sure you get their applicability straight; this is a place where it can get weird, as many guidelines state that they are not written to be used as requirements, but the customer wants to invoke them as such.
Although it is tempting to require compliance to the "latest revision" of one of these documents, this would mean that a new release of a standard could invalidate your design, even if it is completed and installed. Requirement documents must call out the EXACT ISSUE of each industry standard referenced (e.g., Revision C, 4/17/2016). If the standard is updated, the update can then be evaluated against the existing design to see if changes are needed to bring it into compliance, and to determine the costs incurred, if any.
Standards referenced are usually summarized in a table at the front of the document. Here is a sample of some words to go with your table.
The tables in this section specify the exact issue of the documents referenced in this specification. The document title or base number may be used elsewhere in this document to refer to the standards referenced in this section.
So if you want to see what revision of the document applies, you jump back to the reference documents table.
You may see the reference documents broken into several sections. This is done to assign responsibility to specific paragraphs in the document to functional organizations.
In general, there will be three libraries of documents. Place these in a common area to prevent designers from grabbing a copy from somewhere, and working to the wrong standard. These libraries must be managed, as they will change periodically. They also will have their own compliance matrices, and the Standards and Specifications may have to be purchased.
Example:
Many times a customer may reference an industry standard that is not yet stable. Or they may not want to use the latest version of a standard, because they do not approve of something in it. If you are modifying a piece of equipment that was delivered to Rev B of a standard, then Rev F of the standard would not be of much use during design, unless you are upgrading the equipment to meet Rev F. This is normal, welcome to the real world. Recognize the risk, determine whether you or the customer is taking it, and bid accordingly.
Some customers will have their own standards, which have varying degrees of accuracy and usefulness. Be sure to flow these to your subcontractors if they apply to the work the subcontractor will perform. Although it is tempting to just send all the customer standards you receive to your subcontractor with instructions to comply, and let them figure it out, it is usually a good idea to write a Statement of Work for your subcontractor which references specific sections, to make sure that the scope of work required is clear. This is especially true if your subcontractor is used to working to different standards, and now must work to your customer's. And your customer's standards may only cover part of what is required to build the product, creating questions at your subcontractor about what governs the rest. You are going to have to fill in the blanks sometime. Rather than let the vendor guess, take care of this up front.
So the work a subcontractor performs is done to a combination of three sets of standards:
Your customer's,
your internal standards, and
the standards in place at your subcontractor.
Generating a SoW for your subcontractor is pretty straightforward, and takes the mystery out of this.
Be wary: customer standards were generated and maintained, at some cost to the customer, to handle problems that occurred in the past. They may have an evangelist waiting to assist you in understanding the intent. Sometimes these people are very helpful, sometimes not. In my experience, they have been very extreme, either one way or the other. They also may address problems that have already been solved.
Most companies have their own standards: for instance, part numbering, test interfaces, disposition of reworked items, documentation, and the release system. Some companies have more than one set of standard processes to pick from.
If a standard is not specified, then the work will be performed according to your standard process, or for subcontractors, to theirs. If you plan on doing something else, you better spell it out.
Schematics, CAD files, Bills of Materials, and other standard methods of data exchange have very detailed standard formats that can be specified, usually with just a single word or acronym. Here is a list. A specification may permit submission in "Contractor Format", or it may require a specific customer format. This is a pretty important distinction. Be sure that the file formats for drawings and lists are agreed upon.
If the customer has their own format for a submission, like a test procedure or thermal analysis, it will be described in a "Data Item Description", provided as part of the contract.
Why would a customer pay good money for a conversion from your format to theirs? The information is exactly the same, right?
If they are receiving and combining inputs from many subcontractors, it is a tremendous amount of work to integrate these differently formatted documents and verify all the needed information is there. And if they get you to format it so their eventual customer can use it as is, they save even more money. And there is a risk that the conversion may include misinterpretations or errors, and the customer is not in a good place to find and fix these. So having everyone convert to their format turns a very difficult problem into an easy one that is spread among many contributors (everyone: make it look like this, vs: You! - make everyone's look like this).
So it is important to check, before submission, whether the document can be in your format, or must be in theirs. And customers are willing to pay for this, because it saves them the effort of doing it. It is far easier for you to work the technical data into their format than for them to understand your design and convert the data.
The Proposal As The Requirements Document
Sometimes the proposal your company submits is the document you will be held to, and your requirements will be found there. This was typical of most municipal contracts we were awarded. This also includes commitments at oral presentations, Q&A records, and other transmissions. This is typical of customers procuring a semicustom system, that will be modified for use in their specific application.
This means that the proposal must be generated carefully, to avoid conflicting or unclear statements. Proposals are full of statements that say things like "speeds of up to 10Mbits/second will be possible". What is not clear from this statement is whether the product will do this when the customer gets it, or the design could be modified to do this at some future time, or whether it will only do so under special conditions. Make sure that somewhere in the proposal, it states specifically what will be delivered under the contract.
While it may be possible to work to the proposal directly, current methodology is to load the requirements into a database, and work from that. If you are lucky, your customer will go along with it, work with you on the requirements matrix, and retire the proposal. If not - I hope you were very careful about the wording in the proposal.
Some requirements are implied by the product type. Examples:
A hand held device cannot get too hot to hold,
a device that may be accessed by untrained people must remain safe, no matter how they mistreat it,
a vehicle unit failure should not bring down the power system on a vehicle,
units that are regularly moved are going to be carried by something - handle? the cables? the knobs? a carrying case?
Some of these implicit requirements are too extensive to place in a spec. If you are new to an industry, be sure to spend some time on implied requirements. For example, an industrial hand held radio must withstand being dropped many times without requiring repair, however, most consumer cell phones do not have to survive these drops to be a successful product. The implied requirements are different.
For a close to home example, take a look at your bug tracker. How many of the bugs violate an actual written specification? You will probably find a lot of them are driven by implied requirements (the processor randomly reboots every 10 minutes or so - this is bad, right?), rather than explicit requirements.
If a customer buys a standard product, then most suppliers will only guarantee that it has been built to the construction drawings. There may be a few guaranteed specs (such as a revenue meter built to legal metering standards), but not many. It is up to the purchaser to verify that the complete product is acceptable and meets their needs. The product delivered during production will be the same as the one the customer was supplied for evaluation. Documentation is usually supplied under non-disclosure if the customer wants to get picky, but you get the idea. The customer is ordering a product by part number. And if your company sees the need to update it to make it more attractive/producible, you notify the customer to see if they want to accept the change, or still take delivery of the older product.
Many times, a design will start with a standard product that will be modified in some way. In this case, the new features will be covered by one of the other methods listed here. But the base product is whatever the base product part number specifies, and only the new features are covered by the specification. Everything else is from the baseline, by part number. So the spec starts out as "Product XYZ will be modified as below". This saves a lot of headaches, especially since a modified development system is probably exactly what they are getting.
The customer may try to finesse you into accepting a complete requirements document on your standard product. What is at risk here is that you will have to guarantee all the performance in the spec you agree to. If your customer accepts your product as is, then he accepts the risk (the product is what it is). If you agree to a technical spec, then you accept the risk for each of the requirement items, and must spend time and effort to verify compliance with each requirement - and with the customer's interpretation of verification of the requirement. It can be quite expensive, and most standard product vendors know this and avoid developing formal specs for their standard products like the plague.
Sometimes a customer just says: Do This!
This is most evident on Purchase Orders, for subjects such as the exact delivery number and dates, and implementation of options such as the software to be loaded and parts to be used.
Direction is not a performance requirement. If the customer directs you to install their software load in your unit prior to shipment, then you verify it by checking the load and saying "yep, there it is". If the customer directs you to test on their equipment prior to shipment, then check the box that says you did. They are accepting responsibility for the outcome.
They may also direct you to buy (or not buy) from certain vendors, or to use parts they supply. If they do this, then they are accepting the quality level or performance of what is supplied. All you can do is identify a problem with it, although sometimes implicit in this direction is getting it to work. They may also supply a circuit or software to be used exactly as supplied in your equipment (Microsoft Windows 8.1?).
This also applies if your customer has an EMI expert, for instance, who would like to push around physical construction details like tracks on a circuit board. It is important to let the PM know whether the directed changes requested are in scope (neutral or an improvement), or will cause additional effort.
Work requirements, safety, what happens if your building burns down, that sort of thing. Hopefully, there won't be any technical requirements in here, but there may be material that affects the drawings you produce. Usually the customer and vendor have their own versions, and either one accepts the other's, or they are merged in sessions you hope you aren't a part of. But be sure to check that some technical or work item has not snuck in there.
Since there are many possible methods of communicating requirements, and their content may overlap somewhat, it is important to specify the precedence of any documents with respect to each other. This is typically worded in a paragraph in the purchase order or the Statement of Work, that goes something like this:
In the event of conflict between any of the contract documents, the order of precedence shown shall apply:
Purchase Order
This Statement of Work
Technical Specification
Control and Assembly Drawings
Referenced Specifications and Standards
(Your Company) Documents and Procedures
General Contractual Terms and Conditions
For clarity, I have shown quite a few; there may be more or fewer. And of course, the order may be different.
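As a small illustration of how a precedence clause gets applied (the document names and values below are hypothetical, not from any particular contract), the idea is simply: when two documents disagree, walk the list from the top and take the first answer you find.

PRECEDENCE = [
    "Purchase Order",
    "Statement of Work",
    "Technical Specification",
    "Control and Assembly Drawings",
    "Referenced Specifications and Standards",
    "Company Documents and Procedures",
    "General Terms and Conditions",
]

def resolve(values_by_document):
    # values_by_document maps each document that addresses the item to the value it states.
    for doc in PRECEDENCE:
        if doc in values_by_document:
            return doc, values_by_document[doc]
    raise KeyError("no contract document addresses this item")

# Example: the SoW and the spec disagree on first article quantity; the SoW governs.
resolve({"Statement of Work": 2, "Technical Specification": 3})   # ("Statement of Work", 2)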
For user interface state logic, use cases are the way to get requirements fleshed out. A use case is a test procedure for a system concept.
I can't say it any better than you will find here: Gatherspace Use Case Example
Describe requirements by creating various diagrams and simulations. Models may even respond - not in real time or to the actual scale, but complete enough to make value judgements.
Define precise behavioral requirements through test cases. I once had only the production test procedure as the specification for a power supply design.
Create mock-ups and prototypes that can be approved by the customer. Then the production unit(s) are measured against the master. For instance, use a single-board computer on a development carrier to develop a prototype. When the functionality is accepted by the customer, turn it back over to engineering to develop production items that perform identically.
It may be necessary for the system to continue operation for some period of time with a single point failure. These systems (like the brake system on your car) are known as dual redundant, in that there are two systems performing the function, and when one fails you are notified, and some capability remains to perform the function.
No matter how the requirements come in, Requirements Matrices are typically the way that compliance is organized and tracked. These may flow requirements down several levels, and there are many tools available to keep track of these. These tools can be monsters that demand attention similar to an email or CAD system, as requirements are flowed down. They handle difficult situations such as common modules under design for multiple users, integration dependencies, and sophisticated report and document generation. The customer may have a requirements tracking system, and request you integrate with it. Or they can be as simple as a spreadsheet or a table in SharePoint.
At the proposal level, they are just a commitment to comply, or a rationale for why compliance would not be in the customer's interest. As the design is implemented, the matrix can contain approaches, comments, interpretations, and links to test results and analyses, and become a living document linking to all the major design decisions.
You should design your requirements matrix to match the means and abilities of your organization.
Note that Compliance Matrices also work for Statements of Work and other documents. They are especially helpful if you are integrating your customer's Quality requirements with your Quality standards and your subcontractors'.
One of the most important parts of Requirements Management is Traceability. This can be as simple as the minutes of meetings on a standard product launch, or a trace through many levels of formal documents back to the source. Each requirement will have a historical origin, and much discussion will revolve around some of them; that history will be valuable later, when you try to remember what was intended when the requirement was written. The origin of each requirement will probably be one of the methods of requirement generation mentioned earlier in this article.
Also, you will probably have requirements that must be flowed down to your subcontractors. This matrix will state how these requirements actually end up at your subcontractors - either on the purchase order, by a separate specification, or by reference to a vendor's own process which includes the specification.
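Traceability also lends itself to a simple automated check: every "shall" at one level should trace to at least one requirement (or SoW/purchase order clause) at the next level down. A rough sketch, reusing the hypothetical Requirement records from the earlier example:

def untraced_shalls(top_level, lower_level):
    # Return IDs of top-level "shall" requirements that no lower-level requirement claims as a parent.
    covered = {parent for req in lower_level for parent in req.parents}
    return [req.req_id for req in top_level
            if req.keyword == "shall" and req.req_id not in covered]

Anything this returns is either a flow-down you forgot, or a requirement you are carrying for no reason - both worth knowing before integration.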
Verification is proof that each specific “Shall” has been satisfied. You built the product correctly, per the specification.
Each of the requirements is then verified using one of the methods below. You can decide this early on, or it may even be specified before you start work.
1) Test – quantitative measurement under specific conditions. A special test setup may be required to create the conditions and/or observe the result, and results are singular, exact and repeatable.
Example: When the power is removed, the system is notified within 100 milliseconds. You will need a scope to measure the time between when the power drops out and the system is notified.
2) Demonstration – Blind test under relatively open conditions to see if requirement is met. Special test setups are generally not required. A number of cases are performed, and if the system performs as specified during the cases, it passes.
This may be statistical: For instance, if you have a 1% error rate specified, 25 tests without an error would be a pass. 25 tests with 1 error means that you have to run 100 cases. Two errors at 100 cases means...do the math, you get the picture (there is a sketch of this arithmetic after this list).
Demonstration is also used for global requirements where the exact test conditions are difficult to control (reliability, repair time, interference with other equipment).
3) Inspection – Examination by a mutually agreed upon expert for set period of time for specific defects, usually to specific standards.
4) Similarity - Specific implementation has been previously qualified, and evidence will be submitted to show the design is substantially the same.
5) Analysis – Submission of analyses showing the requirements will be met. This could be anything from copies of working spreadsheets and notebooks to Formal Submissions in Customer Format. Formal submissions are usually necessary if the customer is going to fold your analysis in with others, like weight, thermal or reliability. Or they may have to present or preserve evidence of the analysis.
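The arithmetic behind the demonstration example above, read one simple way: keep adding cases until the observed error rate is at or below the specified rate, subject to an agreed minimum number of cases. A sketch of that rule of thumb follows; it is not a formal statistical acceptance sampling plan, and the minimum of 25 cases is an assumption taken from the example above.

import math

def cases_needed(errors_observed, specified_error_rate, minimum_cases=25):
    # Smallest number of cases for which the observed error rate does not exceed the spec.
    if errors_observed == 0:
        return minimum_cases
    return max(minimum_cases, math.ceil(errors_observed / specified_error_rate))

cases_needed(0, 0.01)   # 25  - a clean run of 25 cases is a pass
cases_needed(1, 0.01)   # 100 - one error means running 100 cases
cases_needed(2, 0.01)   # 200 - two errors at 100 cases means 200 in total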
You will organize these by managing a Verification Cross Reference Matrix (VCRM). Each requirement will have a record which includes who will verify it, which of the methods will be used, and sometimes other stuff (equipment and personnel required, functionality and maturity of supporting system elements, projected date of execution, reporting document, etc.). This could be as simple as a five-column Excel spreadsheet, or a much more involved database linked to requirements, sources, etc.
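As a concrete (and entirely hypothetical) picture of the five-column version, the sketch below writes a minimal VCRM as a CSV file and lists the requirements whose verification is still open. The column names and row contents are made up for illustration.

import csv

VCRM_ROWS = [
    # requirement, method,         responsible, report,    status
    ("SYS-0042", "Test",          "EE Lab",    "TR-0042", "Closed"),
    ("SYS-0043", "Analysis",      "Thermal",   "",        "Open"),
    ("SYS-0044", "Demonstration", "Systems",   "",        "Open"),
]

with open("vcrm.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Requirement", "Method", "Responsible", "Report", "Status"])
    writer.writerows(VCRM_ROWS)

print("Open verifications:", [row[0] for row in VCRM_ROWS if row[4] != "Closed"])

The real thing usually lives in the same database as the requirements themselves, but even a spreadsheet like this tells you at a glance where verification stands.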
The quality of verification is a variable you may have a great deal of control over, and it can affect the cost and quality quite a bit. Do you test at one temperature, three, or fifty? What are the functional modes during test? High rate? Low rate? Do you verify at single points, statistically, or 100%? Do you use untrained operators to run the test, or independent Verification and Validation? You can see that the quality of a test, and its cost, can vary a great deal.
And not just the cost of the test itself.
Here is the tough part:
Based on failures in your verification procedure, the unit will have corrective action implemented. This certainly costs money, and may even affect the performance of the unit. That is important if the failure would affect the product during usage, but what if you are fixing things the customer would never encounter in actual operation? Requirements that support features that are no longer necessary, or operational conditions that will not be encountered?
Then you have suboptimized - spent time and money to make things "better" without actually affecting the value of the product to the end user. And once the verification plan has been established, you have to correct any problems it turns up, or get dinged for not doing so. I have seen poor attention to the scope of testing take a product down. So the level of quality of your verification is very important, and should not be taken lightly. Use the verification matrix to find out what is important, and make sure that you don't over-commit.
Since these are work items, any direction on these will typically be found in the Statement of Work.
In the end, any test is a method to reduce risk. As such, it has the same quality aspects as any other part of the design - cost, schedule, performance, etc.
Validation verifies that you Built the Right Product. When placed in the operational environment, does it solve the problems it was supposed to? Many products meet the spec but do not do well in validation (too heavy, cables get mixed up, can't understand the screens, etc.). Many other products fail to meet some requirements, but pass validation when it turns out they work well enough for effective use and safety concerns.
For standard off-the-shelf products, validation could consist of taking a product, handing it to a user wise in the trade but unfamiliar with the product, and getting her feedback. For an operator interface device, this could be accomplished by installing the unit in an actual application and noting the learning curve of a typical operator.
When you buy a product, and there is a period of time during which you decide to keep it or not, you are in effect, performing a validation test on the unit. A Formula 1 car is built to meet exact specifications dictated by the Formula 1 standard. Validation occurs during the test runs, and the final race.
Validation testing usually requires procedures with setups, measurements, etc. These are usually at quite a high level. During validation, an aspect of performance may be found to be essential and added to the requirements, or some requirements may be found to be unnecessary or able to be relaxed.
The requirements will probably be changing as you design. I know, in theory this does not happen, but in real life it can be difficult to keep up with changes as they occur. The customer may come in with changes of her own, or you may suggest changes. The customer may want to capture some of your decisions in the contracting document to control the implementation once it is set, and communicate it to others in her organization.
In a typical design the requirements documents are updated periodically, and change control becomes very critical. It is typical to be working to in-process changes, because the next revision of the document has yet to be released. This is done in good faith. Because this happens so frequently, a progress payment is usually tied to an updated release of the control documents, to make sure everything gets cleaned up.
Every organization must have a way to receive, accept, integrate, and incorporate changes based on requirements updates. This is where the real power of Requirements Management tools like Requisite Pro or DOORS is in evidence. This includes internal products, where there may be factions trying to take the product in new directions without alerting anyone. Once a baseline has been established, a record of the updates must be kept, either to determine if the scope has changed, or to reconcile the development costs and performance.
The Business manager may tell you not to start work on verbal changes until the company has accepted them, or he may want you to start right away. This is a difficult decision, and involves a tradeoff between schedule, budget and customer relations. Some program managers don't want you to implement the change before negotiation, because it takes their leverage away. Some believe that with the changes implemented, the customer is over a barrel - as long as you have proof you were directed to make the change. The first way watches the hen house, the second way is proactive.
By the way, when Walt Disney bid on WWII training movies, he submitted the finished products, rather than paper. If they didn't like the price, just send them back. Now THAT'S being proactive!
Performance is related to requirements, but is not the same thing. Requirements are very important on something built to contract for a specific purpose. Performance is more important for a standard, multi-use product to succeed.
Look here for a discussion of the difference.