Common Testing Pitfalls

Testing Pitfalls Book

My most recent technical book, Common System and Software Testing Pitfalls, documents 92 commonly occurring pitfalls organized into 14 categories. Although that taxonomy was quite comprehensive, my continuing research, together with input from testers who have read the book and compared it to their own experiences, has identified additional pitfalls and even seven new categories of pitfalls. Because significant additions must wait for the publication of a second edition, I have created these web pages, where you can read about major additions and modifications to my taxonomy of pitfalls.

Common System and Software Testing Pitfalls: How to Prevent and Mitigate Them: Descriptions, Symptoms, Consequences, Causes, and Recommendations (SEI Series in Software Engineering), Donald G. Firesmith, Addison-Wesley, 29 December 2013, 256 pp., ISBN 978-0133748550.

To learn about the 92 original pitfalls, you can purchase the book (and leave a review) at the publisher's website or any major online bookseller.

New Pitfalls and Pitfall Categories

With the addition of the new pitfalls, there are now 162 pitfalls divided into 21 categories. The following documents the currently identified common system and software testing pitfalls. The individual pitfalls without links are documented in detail in the book, whereas the pitfalls with links to pitfall-specific webpages are new draft pitfalls that will be included in the second edition of the book.

Note that because these webpages are currently undergoing major updates to match the manuscript of the book's second edition, they are incomplete: some pitfalls are missing from this webpage, and many of the links to the webpages for new pitfalls are broken. I will be updating these webpages as rapidly as practical over the next couple of months.

Technical Review of New Pitfalls and Pitfall Categories

Unlike those in the book, the new testing pitfalls and pitfall categories documented on this website have not yet undergone extensive technical and editorial review. They are therefore subject to change, and some of them are still incomplete. I am currently looking for reviewers to help me mature them so that they can be added to the second edition of the book. Please email any comments or recommended changes and additions to dgf(at)sei(dot)cmu(dot)edu, and I will consider them for publication both on this website and in future editions of the book.

Categories of Testing Pitfalls

The testing pitfalls have been organized into the following categories and subcategories:

  1. General Testing Pitfalls:
    1. Test Planning and Scheduling Pitfalls
    2. Stakeholder Involvement and Commitment Pitfalls
    3. Management Pitfalls
    4. Staffing Pitfalls
    5. Process Pitfalls
    6. Pitfall-Related Pitfalls [New Pitfall Category]
    7. Test Tools and Environments Pitfalls
    8. Automated Testing Pitfalls [New Pitfall Category]
    9. Communication Pitfalls
    10. Testing-as-a-Service (TaaS) Pitfalls [New Pitfall Category]
    11. Requirements Pitfalls

  2. Test-Type-Specific Pitfalls:
    1. Executable Model Pitfalls [New Pitfall Category]
    2. Unit Testing Pitfalls
    3. Integration Testing Pitfalls
    4. Specialty Engineering Testing Pitfalls
    5. System Testing Pitfalls
    6. User Testing Pitfalls [New Pitfall Category]
    7. A/B Testing Pitfalls [New Pitfall Category]
    8. Acceptance Testing Pitfalls [New Pitfall Category]
    9. System of Systems (SoS) Testing Pitfalls
    10. Regression Testing Pitfalls

General Testing Pitfalls

These general testing pitfalls are not primarily specific to any single type of testing.

Test Planning and Scheduling Pitfalls

  1. No Separate Test Planning Documentation (GEN-TPS-1)
    There is no separate testing-specific planning documentation, only incomplete, high-level overviews of testing in the general planning documents.
  2. Incomplete Test Planning (GEN-TPS-2)
    Test planning and its associated documentation are not sufficiently complete for the current point in the system development cycle.
  3. Test Plans Ignored (GEN-TPS-3)
    The test planning documentation is ignored (that is, it becomes “shelfware”) once it is developed and delivered.
  4. Test-Case Documents as Test Plans (GEN-TPS-4)
    Test-case documents that document specific test cases are mislabeled as test plans.
  5. Inadequate Test Schedule (GEN-TPS-5)
    The testing schedule is inadequate to complete proper testing.
  6. Testing at the End (GEN-TPS-6)
    All testing is performed late in the development cycle; little or no executable-model, unit, or integration testing is planned or performed during the early and middle stages of the development cycle.
  7. Independent Test Schedule (GEN-TPS-7) [New Pitfall]
    The test schedule is developed independently of the project master schedule and the schedules of the other development activities.

Stakeholder Involvement and Commitment Pitfalls

  1. Wrong Testing Mindset (GEN-SIC-1)
    Some testers and testing stakeholders have one or more incorrect beliefs concerning testing.
  2. Unrealistic Testing Expectations (GEN-SIC-2)
    Testing stakeholders (especially customer representatives and managers) have various unrealistic expectations with regard to testing.
  3. Assuming Testing Only Verification Method Needed (GEN-SIC-3) [New Pitfall]
    Testing stakeholders mistakenly believe that testing is always the best and only system or software verification method needed.
  4. Mistaking Demonstration for Testing (GEN-SIC-4) [New Pitfall]
    Testing stakeholders mistakenly believe that demonstrations are a valid type of testing.
  5. Lack of Stakeholder Commitment to Testing (GEN-SIC-5)
    Stakeholder commitment to the testing effort is inadequate; sufficient resources (for example, people, time in the schedule, tools, or funding) are not allocated to the testing effort.

Management Pitfalls

  1. Inadequate Test Resources (GEN-MGMT-1)
    Management allocates inadequate resources (for example, budget, schedule, staffing, and facilities) to the testing effort.
  2. Inappropriate External Pressures (GEN-MGMT-2)
    Managers and others in positions of authority subject testers to inappropriate external pressures.
  3. Inadequate Test-Related Risk Management (GEN-MGMT-3)
    There are too few test-related risks identified in the project’s official risk repository, and those that are identified have inappropriately low probabilities, low harm severities, and low priorities.
  4. Inadequate Test Metrics (GEN-MGMT-4)
    Too few test metrics are produced, analyzed, reported, or acted upon, and some of the test metrics that are produced are inappropriate or not very useful.
  5. Inconvenient Test Results Ignored (GEN-MGMT-5)
    Management ignores or treats lightly inconvenient negative test results (especially those with negative ramifications for the schedule, budget, or system quality).
  6. Test Lessons Learned Ignored (GEN-MGMT-6)
    Lessons learned from testing on previous projects are ignored and not put into practice on the current project.

Staffing Pitfalls

  1. Lack of Independence (GEN-STF-1)
    The test organization or project test team lacks adequate administrative, financial, and technical independence to withstand inappropriate pressure from development management to cut corners.
  2. Unclear Testing Responsibilities (GEN-STF-2)
    The testing responsibilities are unclear and do not adequately address which organizations, teams, and people are going to be responsible for and perform the different types of testing.
  3. Developers Responsible for All Testing (GEN-STF-3)
    Developers are responsible for all of the developmental testing that occurs during system or software development.
  4. Testers Responsible for All Testing (GEN-STF-4)
    Testers are responsible for all of the developmental testing that occurs during system or software development.
  5. Only Testers Held Responsible for Quality (GEN-STF-5) [New Pitfall]
    Testers are (solely) responsible for the quality of the system or software under test.
  6. Testers Fix Defects (GEN-STF-6) [New Pitfall]
    The testers debug (diagnose and fix) the defects they find in the object under test (OUT) instead of merely reporting them to the appropriate developer(s).
  7. Users Responsible for Testing (GEN-STF-7) [New Pitfall]
    The users are responsible for most of the testing, which occurs after the system(s) are operational.
  8. Inadequate Testing Expertise (GEN-STF-8)
    Some testers, developers, or other testing stakeholders have inadequate testing-related understanding, expertise, experience, or training.
  9. Inadequate Domain Expertise (GEN-STF-9) [New Pitfall]
    Testers do not have adequate training, experience, and expertise in the system’s application domain.
  10. Adversarial Relationship (GEN-STF-10) [New Pitfall]
    A counterproductive adversarial relationship exists between the testers and either management, the developers, or both.
  11. Too Few Testers (GEN-STF-11) [New Pitfall]
    There are too few testers to perform all of the planned and needed testing.
  12. Allowing Developers to Close Defect Reports (GEN-STF-12) [New Pitfall]
    The developer assigned to fix a defect is allowed to close the defect report without tester concurrence.
  13. Testing Death March (GEN-STF-13) [New Pitfall]
    Testing becomes a death march requiring unsustainable overwork by the testers, thereby ensuring the failure of the testing program.
  14. All Testers Assumed Equal (GEN-STF-14) [New Pitfall]
    Project managers or other testing stakeholders with influence over staffing mistakenly assume that all testers are equal and interchangeable.

Process Pitfalls

  1. No Planned Testing Process (GEN-PRO-1) [New Pitfall]
    There is no real testing process because all testing (if any) is totally ad hoc and completely up to the whims of the individual developers.
  2. Essentially No Testing (GEN-PRO-2) [New Pitfall]
    Essentially no explicit developmental or operational testing is being performed. All testing is being implicitly performed by the users of the system or software. This pitfall is also known as “Test by User”.
  3. Inadequate Testing (GEN-PRO-3)
    The testers or developers fail to adequately test certain testable behaviors, characteristics, or components of the system or software under test.
  4. Testing Process Ignored (GEN-PRO-4) [New Pitfall]
    The testers, developers, or managers ignore the official documented as-planned test process.
  5. One-Size-Fits-All Testing (GEN-PRO-5)
    All testing is performed the same way, to the same level of rigor, regardless of its criticality.
  6. Sunny-Day Testing Only (GEN-PRO-6) [New Pitfall]
    Testing is largely or totally restricted to verifying that the system or software under test does what it should under normal (sunny-day) situations; it does not verify that the system properly handles rainy-day situations (that is, errors, faults, or failures).
  7. Testing and Engineering Processes Not Integrated (GEN-PRO-7)
    The testing process is not adequately integrated into the overall system engineering process, but is rather treated as a separate specialty engineering activity with only limited interfaces with the primary engineering activities.
  8. Inadequate Test Prioritization (GEN-PRO-8)
    Testing is not adequately prioritized (for example, all types of testing have the same priority).
  9. Test-Type Confusion (GEN-PRO-9)
    Test cases from one type of testing are redundantly repeated as part of another type of testing, even though the testing types have quite different purposes and scopes.
  10. Functionality Testing Overemphasized (GEN-PRO-10)
    There is an overemphasis on testing functionality as opposed to testing quality, data, and interface requirements and testing architectural, design, and implementation constraints.
  11. Black-Box System Testing Overemphasized (GEN-PRO-11)
    There is an overemphasis on black-box system testing for requirements conformance, and there is very little white-box unit and integration testing for the architecture, design, and implementation verification.
  12. Black-Box System Testing Underemphasized (GEN-PRO-12)
    There is an overemphasis on white-box unit and integration testing, and very little time is spent on black-box system testing to verify conformance to the requirements.
  13. Test Preconditions Ignored (GEN-PRO-13) [New Pitfall]
    Test cases do not address preconditions such as the system’s internal mode and states as well as the state(s) of the system’s external environment.
  14. Too Immature for Testing (GEN-PRO-14)
    Objects under test (OUTs) are delivered for testing when they are immature and not ready to be tested.
  15. Inadequate Test Data (GEN-PRO-15)
    The test data (including individual test data and sets of test data) lacks adequate fidelity to operational data, is incomplete, or is invalid.
  16. Inadequate Evaluations of Test Assets (GEN-PRO-16)
    The quality of the test assets is not adequately evaluated prior to using them.
  17. Inadequate Maintenance of Test Assets (GEN-PRO-17)
    Test assets are not properly maintained (that is, adequately updated and iterated) as defects are found and the object under test (OUT) is changed.
  18. Testing as a Phase (GEN-PRO-18)
    Testing is treated as a phase that takes place late in a sequential (also known as waterfall) development cycle instead of as an ongoing activity that takes place continuously in an iterative, incremental, and concurrent (an evolutionary, or agile) development cycle.
  19. Testers Not Involved Early (GEN-PRO-19)
    Testers are not involved at the beginning of the project, but rather only once an implementation exists to test.
  20. Developmental Testing During Production (GEN-PRO-20) [New Pitfall]
    Significant system testing is postponed until the system is already in production, when fixing defects is much more difficult and expensive.
  21. No Operational Testing (GEN-PRO-21)
    Representative users are not performing any operational testing of the “completed” system under actual operational conditions.
  22. Test Oracles Ignore Nondeterministic Behavior (GEN-PRO-22) [New Pitfall]
    Testers have no criteria for determining when a test has passed when nondeterministic behavior results in intermittent failures and faults (see the sketch following this list).
  23. Ad Hoc Testing (GEN-PRO-23) [New Pitfall]
    Testers use few if any structured testing techniques, so testing is primarily or completely ad hoc.
  24. Testing in Quality (GEN-PRO-24) [New Pitfall]
    Testing stakeholders rely on testing quality into the system or software under test rather than building quality in from the beginning via all engineering and management activities.
  25. Developers Ignore Testability (GEN-PRO-25) [New Pitfall]
    The system or software under test (SUT) is unnecessarily difficult to test because the developers did not consider testing when designing and implementing it.
  26. Failure to Address the BackBlob (GEN-PRO-26) [New Pitfall]
    Testers do not adequately deal with their increasing workload due to an ever-increasing backlog of testing work, including manual regression testing and the maintenance of automated tests.
  27. Test Assets Not Delivered (GEN-PRO-27) [New Pitfall]
    The system or software under test is delivered without its associated testing assets that would enable the receiving organization(s) to test new capabilities and perform regression testing after changes.
  28. Failure to Analyze Why Defects Escaped Detection (GEN-PRO-28) [New Pitfall]
    The testers fail to analyze the defects that should have been uncovered by the testing that was performed but were not.
  29. Official Test Standards Are Ignored (GEN-PRO-29) [New Pitfall]
    The testers and other testing stakeholders ignore all existing official test standards, such as the ISO/IEC/IEEE 29119 international software testing standards.
  30. Official Test Standards Are Slavishly Followed (GEN-PRO-30) [New Pitfall]
    The testers fail to appropriately tailor one or more official test standards, instead slavishly complying with all of them.
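
The following is a minimal sketch of one way to avoid the Test Oracles Ignore Nondeterministic Behavior pitfall (GEN-PRO-22). It is written in Python; the flaky_operation function, the run count, and the 5% threshold are hypothetical placeholders, not a prescription. The point is that the pass/fail criterion for intermittently failing behavior is made explicit (an agreed maximum failure rate over a fixed number of runs) instead of letting a single lucky or unlucky run decide the verdict.

    import random

    def flaky_operation():
        # Hypothetical nondeterministic operation under test; replace with
        # the real timing- or environment-sensitive behavior.
        return random.random() > 0.02  # succeeds roughly 98% of the time

    def test_flaky_operation_meets_reliability_criterion():
        # Explicit test oracle: at most 5% of 200 runs may fail.
        runs, max_failure_rate = 200, 0.05
        failures = sum(1 for _ in range(runs) if not flaky_operation())
        assert failures / runs <= max_failure_rate, (
            f"{failures} of {runs} runs failed, exceeding the agreed threshold")

    if __name__ == "__main__":
        test_flaky_operation_meets_reliability_criterion()
        print("pass: intermittent failure rate within the agreed threshold")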

Test Tools and Environments Pitfalls

  1. Over-Reliance on Testing Tools (GEN-TTE-1)
    Testers and other testing stakeholders place too much reliance on commercial off-the-shelf (COTS) and homegrown testing tools.
  2. Too Many Target Platforms (GEN-TTE-2)
    The test team and testers are not adequately prepared for testing applications that will execute on numerous target platforms (for example, hardware, operating system, and middleware).
  3. Target Platform Difficult to Access (GEN-TTE-3)
    The testers are not prepared to perform adequate testing when the target platform is not designed to enable access for testing.
  4. Inadequate Test Environments (GEN-TTE-4)
    There are insufficient test tools, test environments or test beds, and test laboratories or facilities, so adequate testing cannot be performed within the schedule and personnel limitations.
  5. Poor Fidelity of Test Environments (GEN-TTE-5)
    The testers build and use test environments or test beds that have poor fidelity to the operational environment of the system or software under test (SUT), and this causes inconclusive or incorrect test results (false-positive and false-negative test results).
  6. Inadequate Test Environment Quality (GEN-TTE-6)
    The quality of one or more test environments is inadequate due to an excessive number of defects.
  7. Test Environments Inadequately Tested (GEN-TTE-7) [New Pitfall]
    Testers do not test their test environments/beds to eliminate defects that could either prevent the testing of the system or software under test or cause incorrect test results.
  8. Inadequate Test Configuration Management (GEN-TTE-8)
    Testing work products (for example, test cases, test scripts, test data, test tools, and test environments) are not under configuration management (CM).
  9. Developers Ignore Testability (GEN-TTE-9)
    It is unnecessarily difficult to develop automated tests because the developers did not consider testing when designing and implementing their system or software (a design-for-testability sketch follows this list).
  10. Test Assets Not Delivered (GEN-TTE-10) [Combination of Two Existing Pitfalls]
    The development organization delivers the system or software to its sustainment organization without the associated test assets needed to support the testing of new capabilities and the regression testing of changes.
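
As a design-for-testability illustration related to the Developers Ignore Testability pitfall (GEN-TTE-9), here is a minimal Python sketch; the Thermostat class and its sensor are hypothetical. Because the sensor dependency is injected rather than hard-coded, an automated test can substitute a controllable fake for real hardware, making the behavior both observable and controllable.

    class Thermostat:
        # The temperature-reading function is injected, so tests can
        # control exactly what readings the thermostat sees.
        def __init__(self, read_temperature):
            self._read_temperature = read_temperature

        def heater_should_run(self, setpoint):
            return self._read_temperature() < setpoint

    def test_heater_runs_when_below_setpoint():
        assert Thermostat(lambda: 17.0).heater_should_run(setpoint=20.0)

    def test_heater_idle_when_at_setpoint():
        assert not Thermostat(lambda: 20.0).heater_should_run(setpoint=20.0)

    if __name__ == "__main__":
        test_heater_runs_when_below_setpoint()
        test_heater_idle_when_at_setpoint()
        print("pass")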

Automated Testing Pitfalls [New Pitfall Category]

  1. Over-Reliance on Manual Testing (GEN-AUTO-1) [Moved from the Test Tools and Environments Category]
    Testers place too much reliance on manual testing so that an insufficient amount of testing is automated.
  2. Automated Testing Replaces Manual Testing (GEN-AUTO-2) [New Pitfall]
    Managers, developers, or testers attempt to replace all manual testing with automated testing.
  3. Excessive Number of Automated Tests (GEN-AUTO-3) [New Pitfall]
    The ratio of the number of automated tests to the amount of deliverable software is too high.
  4. Inappropriate Distribution of Automated Tests (GEN-AUTO-4) [New Pitfall]
    The distribution of the amount of automated testing among the different levels of testing (such as unit testing, integration testing, system testing, and user interface testing) is inappropriate.
  5. Inadequate Automated Test Quality (GEN-AUTO-5) [New Pitfall]
    The automated tests have excessive numbers of defects.
  6. Automated Tests Excessively Complex (GEN-AUTO-6) [New Pitfall]
    The automated tests are significantly more complex than they need to be (see the sketch following this list).
  7. Automated Tests Not Maintained (GEN-AUTO-7) [New Pitfall]
    The automated tests are not maintained, so they are no longer trusted or reusable.
  8. Insufficient Resources Invested (GEN-AUTO-8) [New Pitfall]
    Insufficient resources are allocated to plan for, develop, and maintain automated tests.
  9. Automation Tools Not Appropriate (GEN-AUTO-9) [New Pitfall]
    The developers and testers select inappropriate tools for supporting automated testing.
  10. Stakeholders Ignored (GEN-AUTO-10) [New Pitfall]
    The developers and testers ignore the stakeholders when planning and performing automated testing.
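
In connection with the Automated Tests Excessively Complex pitfall (GEN-AUTO-6), the following minimal Python sketch (parse_price is a hypothetical unit under test) shows the style of short, single-purpose automated test that stays cheap to maintain: one behavior per test, plain assertions, and no conditional logic inside the test itself.

    def parse_price(text):
        # Hypothetical unit under test: converts "$1,234.56" to a number.
        return float(text.replace("$", "").replace(",", ""))

    # Each test verifies exactly one behavior, so a failure points
    # directly at the defect and the test itself needs no debugging.
    def test_parses_plain_dollar_amount():
        assert parse_price("$5.00") == 5.0

    def test_parses_thousands_separator():
        assert parse_price("$1,234.56") == 1234.56

    if __name__ == "__main__":
        test_parses_plain_dollar_amount()
        test_parses_thousands_separator()
        print("pass")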

Regression Testing Pitfalls

  1. Inadequate Regression Test Automation (GEN-REG-1)
    Testers and developers have automated an insufficient number of tests to enable adequate regression testing (see the sketch following this list).
  2. Regression Testing Not Performed (GEN-REG-2)
    Testers and maintainers perform insufficient regression testing to determine if new defects have been accidentally introduced when changes are made to the system.
  3. Inadequate Scope of Regression Testing (GEN-REG-3)
    The scope of regression testing is insufficiently broad.
  4. Only Low-Level Regression Tests (GEN-REG-4)
    Only low-level (for example, unit-level and possibly integration) regression tests are rerun, so there is no system, acceptance, or operational regression testing and no SoS regression testing.
  5. Test Resources Not Delivered for Maintenance (GEN-REG-5)
    The test resources produced by the development organization are not made available to the maintenance organization to support testing new capabilities and regression testing changes.
  6. Only Functional Regression Testing (GEN-REG-6)
    Testers and maintainers only perform regression testing to determine if changes introduce functionality-related defects.
  7. Inadequate Retesting of Reused Software (GEN-REG-7) [New Pitfall]
    Developers reuse software without adequately retesting it to ensure that it continues to operate correctly as part of the current system or application.
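
To make the Inadequate Regression Test Automation pitfall (GEN-REG-1) concrete, here is a minimal Python sketch of an automated regression (characterization) test; format_report and its pinned output are hypothetical. Output that was verified correct in an earlier release is captured as an expected value, so any later change that alters the behavior fails the automated suite immediately instead of depending on manual retesting.

    def format_report(name, total):
        # Hypothetical function whose previously verified behavior
        # must be preserved across future changes.
        return f"Report for {name}: total = {total:.2f}"

    def test_report_format_unchanged():
        # Pins the output that was verified correct in an earlier release.
        assert format_report("Q3", 1234.5) == "Report for Q3: total = 1234.50"

    if __name__ == "__main__":
        test_report_format_unchanged()
        print("pass: no regression detected")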

Test Communication Pitfalls

  1. Inadequate Source Documentation (GEN-COM-1) [Expanded in Scope and Renamed]
    Either requirements engineers, architects, and designers produce inadequate documentation (for example, models and documents) to support testing or such documentation is not provided to the testers.
  2. Inadequate Defect Reports (GEN-COM-2)
    Testers and others create defect reports (also known as bug and trouble reports) that are incomplete, contain incorrect information, or are difficult to read.
  3. Inadequate Test Documentation (GEN-COM-3)
    Testers create test documentation that is incomplete or contains incorrect information.
  4. Source Documents Not Maintained (GEN-COM-4)
    Developers do not properly maintain the requirements specifications, architecture documents, design documents, and associated models that are needed as inputs to the development of tests.
  5. Inadequate Communication Concerning Testing (GEN-COM-5)
    There is inadequate verbal and written communication concerning the testing among testers and other testing stakeholders.
  6. Inconsistent Testing Terminology (GEN-COM-6) [New Pitfall]
    Different testers, developers, managers, and other testing stakeholders often use inconsistent and ambiguous technical jargon so that the same word has different meanings and different words have the same meaning.

Requirements-Related Testing Pitfalls

  1. Tests as Requirements (GEN-REQ-1) [New Pitfall]
    Developers use black-box system- and subsystem-level tests as a replacement for the associated system and subsystem requirements.
  2. Ambiguous Requirements (GEN-REQ-2)
    Testers misinterpret a great many ambiguous requirements and therefore base their testing on incorrect interpretations of these requirements.
  3. Obsolete Requirements (GEN-REQ-3)
    Testers waste effort and time testing whether the system or software under test (SUT) correctly implements a great many obsolete requirements.
  4. Missing Requirements (GEN-REQ-4)
    Testers overlook many undocumented requirements and therefore do not plan for, develop, or run the associated overlooked test cases.
  5. Incomplete Requirements (GEN-REQ-5)
    Testers fail to detect that many requirements are incomplete; therefore, they develop and run correspondingly incomplete or incorrect test cases.
  6. Incorrect Requirements (GEN-REQ-6)
    Testers fail to detect that many requirements are incorrect, and therefore develop and run correspondingly incorrect test cases that produce false-positive and false-negative test results.
  7. Requirements Churn (GEN-REQ-7)
    Testers waste an excessive amount of time and effort developing and running test cases based on many requirements that are not sufficiently stable and that therefore change one or more times prior to delivery.
  8. Improperly Derived Requirements (GEN-REQ-8)
    Testers base their testing on improperly derived requirements, resulting in missing test cases, test cases at the wrong level of abstraction, or incorrect test cases based on crosscutting requirements that are allocated without modification to multiple architectural components.
  9. Verification Methods Not Properly Specified (GEN-REQ-9)
    Testers (or other developers) fail to properly specify the verification method(s) for each requirement, thereby causing requirements to be verified using unnecessarily inefficient or ineffective verification method(s).
  10. Lack of Requirements Trace (GEN-REQ-10)
    The testers do not trace the requirements to individual tests or test cases, thereby making it unnecessarily difficult to determine whether the tests are inadequate or excessive (see the sketch following this list).
  11. Titanic Effect of Deferred Requirements (GEN-REQ-11) [New Pitfall]
    Managers or chief engineers repeatedly defer more and more requirements (as well as residual defects and defect fixes) from the previous increment, block, or build to the current one after the resources for the current one have been allocated. This results in the "Titanic Effect," in which water (deferred requirements) flows over the bulkhead from one watertight compartment (increment) to the next, so that the ship (project) floats lower and lower in the water until it eventually sinks (the project is cancelled). This continual deferral of requirements has a titanic effect on the amount of testing to be performed and the resources needed to accomplish it.
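
For the Lack of Requirements Trace pitfall (GEN-REQ-10), the following minimal Python sketch (the requirement IDs and test names are hypothetical) shows how even a lightweight requirements-to-test trace can be checked automatically, making untested requirements immediately visible.

    # Hypothetical trace from requirement IDs to the tests that verify them.
    TRACE = {
        "REQ-001": ["test_login_succeeds_with_valid_credentials"],
        "REQ-002": ["test_login_rejects_bad_password"],
        "REQ-003": [],  # no test yet; flagged as a gap below
    }

    def untraced_requirements(trace):
        # A requirement with no associated tests is a coverage gap.
        return sorted(req for req, tests in trace.items() if not tests)

    if __name__ == "__main__":
        gaps = untraced_requirements(TRACE)
        print("untraced requirements:", ", ".join(gaps) if gaps else "none")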

Test-Type-Specific Pitfalls

The following pitfalls are primarily restricted to a single type of testing:

Executable Model Testing Pitfalls [New Pitfall Category]

  1. Inadequate Executable Models (TTS-MOD-1) [New Pitfall]
    Either there are no executable requirements, architectural, or design models, or the models that exist are inadequate to enable the associated test cases to be developed manually or automatically.
  2. Executable Models Not Tested (TTS-MOD-2) [New Pitfall]
    No one (such as testers, requirements engineers, architects, or designers) tests the executable requirements, architectural, or design models to verify whether they conform to the requirements or incorporate any defects.

Unit Testing Pitfalls

  1. Testing Does Not Drive Design and Implementation (TTS-UNT-1)
    Software developers and testers do not develop their tests first and then use these tests to drive the development of the associated architecture, design, and implementation (a test-first sketch follows this list).
  2. Conflict of Interest (TTS-UNT-2)
    Nothing is done to address the following conflict of interest that exists when developers test their own work products: essentially, they are being asked to demonstrate that their software is defective.
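
The test-first workflow behind the Testing Does Not Drive Design and Implementation pitfall (TTS-UNT-1) can be illustrated with a minimal Python sketch; the slugify function is a hypothetical example. The test is written before the implementation exists, fails against a stub, and then drives the design of the code that makes it pass.

    # Step 1: written first, this test initially fails because
    # slugify() does not yet exist (or is only a stub).
    def test_slugify_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"

    # Step 2: the simplest implementation that makes the test pass,
    # with the test shaping the function's interface.
    def slugify(title):
        return "-".join(title.lower().split())

    if __name__ == "__main__":
        test_slugify_lowercases_and_hyphenates()
        print("pass")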

Integration Testing Pitfalls

  1. Integration Decreases Testability Ignored (TTS-INT-1)
    Testers fail to take into account that integration encapsulates the individual parts of the whole and the interactions between them, thereby making the internal parts of the integrated whole less observable and less controllable and, therefore, less testable.
  2. Inadequate Self-Testing (TTS-INT-2)
    Testers are unprepared to address the difficulty of testing encapsulated components due to a lack of system- or software-internal self-tests.
  3. Unavailable Components (TTS-INT-3)
    Integration testing must be postponed due to the unavailability of (1) system hardware or software components or (2) test environment components.
  4. System Testing as Integration Testing (TTS-INT-4)
    Testers are actually performing system-level tests of system functionality when they are supposed to be performing integration testing of component interfaces and interactions.

Specialty Engineering Testing Pitfalls

  1. Inadequate Capacity Testing (TTS-SPC-1)
    Testers perform little or no capacity testing (or the capacity testing they do perform is superficial) to determine the degree to which the system or software degrades gracefully as capacity limits are approached, reached, and exceeded.
  2. Inadequate Concurrency Testing (TTS-SPC-2)
    Testers perform little or no concurrency testing (or the concurrency testing they do perform is superficial) to explicitly uncover the defects that cause the common types of concurrency faults and failures: deadlock, livelock, starvation, priority inversion, race conditions, inconsistent views of shared memory, and unintentional infinite loops (see the sketch following this list).
  3. Inadequate Internationalization Testing (TTS-SPC-3)
    Testers perform little or no internationalization testing (or the internationalization testing they do perform is superficial) to determine the degree to which the system is configurable to perform appropriately in multiple countries.
  4. Inadequate Interface Standards Conformance Testing (TTS-SPC-4) [New Pitfall]
    Testers perform little or no conformance testing of key interfaces to open interface standards (or the conformance testing they do perform is superficial) to determine whether the system truly has an Open System Architecture (OSA).
  5. Inadequate Interoperability Testing (TTS-SPC-5)
    Testers perform little or no interoperability testing (or the interoperability testing they do perform is superficial) to determine the degree to which the system successfully interfaces and collaborates with other systems.
  6. Inadequate Performance Testing (TTS-SPC-6)
    Testers perform little or no performance testing (or the testing they do perform is only superficial) to determine the degree to which the system has adequate levels of the performance quality attributes: event schedulability, jitter, latency, response time, and throughput.
  7. Inadequate Reliability Testing (TTS-SPC-7)
    Testers perform little or no long-duration reliability testing (also known as stability testing)—or the reliability testing they do perform is superficial (for example, it is not done under operational profiles and is not based on the results of any reliability models)—to determine the degree to which the system continues to function over time without failure.
  8. Inadequate Robustness Testing (TTS-SPC-8)
    Testers perform little or no robustness testing, or the robustness testing they do perform is superficial (for example, it is not based on the results of any robustness models), to determine the degree to which the system exhibits adequate error, fault, failure, and environmental tolerance.
  9. Inadequate Safety Testing (TTS-SPC-9)
    Testers perform little or no safety testing, or the safety testing they do perform is superficial (for example, it is not based on the results of a safety or hazard analysis), to determine the degree to which the system is safe from causing or suffering accidental harm.
  10. Inadequate Security Testing (TTS-SPC-10)
    Testers perform little or no security testing—or the security testing they do perform is superficial (for example, it is not based on the results of a security or threat analysis)—to determine the degree to which the system is secure from causing or suffering malicious harm.
  11. Inadequate Usability Testing (TTS-SPC-11)
    Testers or usability engineers perform little or no usability testing—or the usability testing they do perform is superficial—to determine the degree to which the system’s human-machine interfaces meet the system’s requirements for usability, manpower, personnel, training, human factors engineering (HFE), and habitability.
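
As an illustration of the Inadequate Concurrency Testing pitfall (TTS-SPC-2), the following minimal Python sketch (the Counter class is hypothetical) shows a concurrency test that deliberately hammers shared state from many threads. Removing the lock makes the race condition visible as lost updates, which no single-threaded functional test would ever reach.

    import threading

    class Counter:
        # Hypothetical shared resource; delete the lock to observe the
        # race condition this test is designed to expose.
        def __init__(self):
            self.value = 0
            self._lock = threading.Lock()

        def increment(self):
            with self._lock:
                self.value += 1

    def test_concurrent_increments_are_not_lost():
        counter, n_threads, n_increments = Counter(), 8, 10_000
        def worker():
            for _ in range(n_increments):
                counter.increment()
        threads = [threading.Thread(target=worker) for _ in range(n_threads)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()
        # Test oracle: no increment may be lost to a race condition.
        assert counter.value == n_threads * n_increments

    if __name__ == "__main__":
        test_concurrent_increments_are_not_lost()
        print("pass: no lost updates")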

System Testing Pitfalls

  1. Test Hooks Remain (TTS-SYS-1)
    Testers fail to remove temporary test hooks after completing testing, so they remain in the delivered or fielded system (see the sketch following this list).
  2. Lack of Test Hooks (TTS-SYS-2)
    Testers fail to take into account how a lack of test hooks makes it more difficult to test parts of the system hidden via information hiding.
  3. Inadequate End-to-End Testing (TTS-SYS-3)
    Testers perform inadequate system-level functional testing of a system’s end-to-end support for its missions.
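
For the two test-hook pitfalls above (TTS-SYS-1 and TTS-SYS-2), here is a minimal Python sketch of one compromise; the hook, the ENABLE_TEST_HOOKS environment variable, and the release check are all hypothetical. The hook remains permanently available for testability but is disabled unless explicitly switched on, and a release-pipeline check asserts that no fielded build ships with hooks active.

    import os

    # Test hook gated behind an explicit flag so it cannot be left
    # active by accident in a delivered or fielded system.
    TEST_HOOKS_ENABLED = os.environ.get("ENABLE_TEST_HOOKS") == "1"

    def dump_internal_state(system_state):
        # Gives testers visibility into encapsulated state; a no-op
        # unless test hooks are explicitly enabled.
        if not TEST_HOOKS_ENABLED:
            return None
        return dict(system_state)

    def release_sanity_check():
        # Run by the release pipeline: fail loudly if a build is about
        # to ship with test hooks switched on.
        assert not TEST_HOOKS_ENABLED, "test hooks must be disabled in release builds"

    if __name__ == "__main__":
        release_sanity_check()
        print("release check passed: test hooks are disabled")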

User Testing Pitfalls

  1. Inadequate User Involvement (TTS-UT-1) [New Pitfall]
    Too few users representing too few of the different types of users are involved in the performance of user testing and the evaluation of its results.
  2. Unprepared User Representatives (TTS-UT-2) [New Pitfall]
    The user representatives are not adequately prepared to effectively and efficiently perform user testing.
  3. User Testing Merely Repeats System Testing (TTS-UT-3) [New Pitfall]
    User testing is merely a repetition of a subset of the existing system tests by representative users.
  4. User Testing is Mistaken for Acceptance Testing (TTS-UT-4) [New Pitfall]
    User testing, often referred to as User Acceptance Testing (UAT), is frequently confused with system acceptance testing in spite of their very different goals and descriptions.
  5. Knowledgeable and Careful User (TTS-UT-5) [New Pitfall]
    Testers (and developers) mistakenly assume that the user will be careful and as knowledgeable as they are about how the system works.

Acceptance Testing Pitfalls

  1. No Clear System Acceptance Criteria (TTS-AT-1) [New Pitfall]
    No clear, well-documented, and agreed-upon criteria exist for the acquisition/customer organization accepting delivery of (and paying for) the completed system from the development organization.

System of Systems (SoS) Testing Pitfalls

  1. Inadequate SoS Planning (TTS-SoS-1)
    Testers and SoS architects perform an inadequate amount of SoS test planning and fail to appropriately document their plans in SoS-level test planning documentation.
  2. Unclear SoS Testing Responsibilities (TTS-SoS-2)
    Managers or testers fail to clearly define and document the responsibilities for performing end-to-end SoS testing.
  3. Inadequate Resources for SoS Testing (TTS-SoS-3)
    Management fails to provide adequate resources for system of systems (SoS) testing.
  4. SoS Testing Not Properly Scheduled (TTS-SoS-4)
    System of systems testing is not properly scheduled and coordinated with the individual systems’ testing and delivery schedules.
  5. Inadequate SoS Requirements (TTS-SoS-5)
    Many SoS-level requirements are missing, are of poor quality, or are never officially approved or funded.
  6. Inadequate Support from Individual System Projects (TTS-SoS-6)
    Test support from individual system development or maintenance projects is inadequate to perform system of systems testing.
  7. Inadequate Defect Tracking Across Projects (TTS-SoS-7)
    Defect tracking across individual system development or maintenance projects is inadequate to support system of systems testing.
  8. Finger-Pointing (TTS-SoS-8)
    Different system development or maintenance projects assign the responsibility for finding and fixing SoS-level defects to other projects.