
Common Testing Pitfalls

Testing Pitfalls Book

My most recent technical book, Common System and Software Testing Pitfalls, documents 92 commonly occurring pitfalls organized into 14 categories. Although that taxonomy was quite comprehensive, additional pitfalls and even entire new categories of pitfalls have been identified as I have continued my research, and as more testers have read the book, compared it to their personal experiences, and provided me with their input. Because significant additions must wait for the publication of a second edition, I have created these web pages, where you can read about major additions and modifications to my taxonomy of pitfalls.

Common System and Software Testing Pitfalls: How to Prevent and Mitigate Them (SEI Series in Software Engineering), Donald G. Firesmith, Addison-Wesley, 29 December 2013, 256 pp., ISBN 978-0133748550.

To learn about these 92 original pitfalls, you can purchase the book from the publisher or any major bookseller (and, ideally, leave a review).

New Pitfalls and Pitfall Categories

With the addition of new pitfalls, there are now 176 pitfalls divided into 24 categories. The following is intended to document the currently identified common system and software testing pitfalls. The individual pitfalls without links are documented in detail in the book, whereas the pitfalls with links to pitfall-specific webpages are new draft pitfalls that will be included in the second edition of the book.

Note that because these webpages are currently undergoing major updates to match the book's second-edition manuscript, they are incomplete. Some of the pitfalls on this webpage are missing, and the links to many of the new pitfalls' webpages are broken. I will be updating these webpages as rapidly as practical over the next couple of months.

Top of page

Technical Review of New Pitfalls and Pitfall Categories

Unlike those in the book, these new testing pitfalls and pitfall categories documented in this website have not yet undergone extensive technical and editorial review. They are therefore subject to change, and some of them are still incomplete. I am currently looking for reviewers to help me mature them so that they can be added to the second edition of the book. Please email any comments or recommended changes and additions to dgf(at)sei(dot)cmu(dot)edu, and I will consider them for publication both on this website and in future editions of the book.

Top of page

Categories of Testing Pitfalls

The testing pitfalls have been organized into the following categories and subcategories. Those marked with [see book] were published in the first edition of the book, whereas those marked with [new] have been added since the book's publication, have not yet been adequately reviewed, and are therefore highly subject to change.

  1. General Pitfalls (GEN):
    1. Test Planning and Scheduling (TPS) [see book]
    2. Stakeholder Involvement and Commitment (SIC) [see book]
    3. Management (MGMT) [see book]
    4. Staffing (STF) [see book]
    5. Process (PRO) [see book]
    6. Test Design (TDES) [new]
    7. Pitfall-Related Pitfalls (PRP) [new]
    8. Test Tools and Environments (TTE) [see book]
    9. Automated Testing (AUTO) [new]
    10. Communication (COM) [see book]
    11. Testing-as-a-Service (TaaS) [new]
    12. Requirements (REQ) [see book]
    13. Test Data (TDAT) [new]

  2. Test-Type-Specific Pitfalls (TTS):
    1. Executable Model (MOD) [new]
    2. Unit Testing (UNT) [see book]
    3. Integration Testing (INT) [see book]
    4. Specialty Engineering Testing (SPC) [see book]
    5. System Testing (SYS) [see book]
    6. User Testing (UT) [new]
    7. A/B Testing (ABT) [new]
    8. Acceptance Testing (AT) [new]
    9. Operational Testing (OT) [new]
    10. System of Systems Testing (SoS) [see book]
    11. Regression Testing (REG) [see book]
Top of page

General Testing Pitfalls (GEN)

These general testing pitfalls are not primarily specific to any single type of testing.

Test Planning and Scheduling Pitfalls (TPS)

  1. No Separate Test Planning Documentation (GEN-TPS-1) [see book]
    There is no separate testing-specific planning documentation, only incomplete, high-level overviews of testing in the general planning documents.
  2. Incomplete Test Planning (GEN-TPS-2) [see book]
    Test planning and its associated documentation are not sufficiently complete for the current point in the system development cycle.
  3. Test Plans Ignored (GEN-TPS-3) [see book]
    The test planning documentation is ignored (that is, it becomes “shelfware”) once it is developed and delivered.
  4. Test-Case Documents as Test Plans (GEN-TPS-4) [see book]
    Test-case documents that document specific test cases are mislabeled as test plans.
  5. Inadequate Test Schedule (GEN-TPS-5) [see book]
    The testing schedule is inadequate to complete proper testing.
  6. Testing at the End (GEN-TPS-6) [see book]
    All testing is performed late in the development cycle; there is little or no testing of executable models or unit or integration testing planned or performed during the early and middle stages of the development cycle.
  7. Independent Test Schedule (GEN-TPS-7) [new]
    The test schedule is developed independently of the project master schedule and the schedules of the other development activities.
Top of page

Stakeholder Involvement and Commitment Pitfalls (SIC)

  1. Wrong Testing Mindset (GEN-SIC-1) [see book]
    Some testers and testing stakeholders have one or more incorrect beliefs concerning testing.
  2. Unrealistic Testing Expectations (GEN-SIC-2) [see book]
    Testing stakeholders (especially customer representatives and managers) have various unrealistic expectations with regard to testing.
  3. Assuming Testing Only Verification Method Needed (GEN-SIC-3) [new]
    Testing stakeholders mistakenly believe that testing is always the best and only verification method that is needed.
  4. Mistaking Demonstration for Testing (GEN-SIC-4) [new]
    Testing stakeholders mistakenly believe that demonstrations are a valid type of testing.
  5. Lack of Stakeholder Commitment to Testing (GEN-SIC-5) [see book]
    Stakeholder commitment to the testing effort is inadequate; sufficient resources (for example, people, time in the schedule, tools, or funding) are not allocated to the testing effort.
Top of page

Management Pitfalls (MGMT)

  1. Inadequate Test Resources (GEN-MGMT-1) [see book]
    Management allocates inadequate resources (for example, budget, schedule, staffing, and facilities) to the testing effort.
  2. Inappropriate External Pressures (GEN-MGMT-2) [see book]
    Managers and others in positions of authority subject testers to inappropriate external pressures.
  3. Inadequate Test-Related Risk Management (GEN-MGMT-3) [see book]
    There are too few test-related risks identified in the project’s official risk repository, and those that are identified have inappropriately low probabilities, low harm severities, and low priorities.
  4. Inadequate Test Metrics (GEN-MGMT-4) [see book]
    Too few test metrics are produced, analyzed, reported, or acted upon, and some of the test metrics that are produced are inappropriate or not very useful.
  5. Inconvenient Test Results Ignored (GEN-MGMT-5) [see book]
    Management ignores or treats lightly inconvenient negative test results (especially those with negative ramifications for the schedule, budget, or system quality).
  6. Test Lessons Learned Ignored (GEN-MGMT-6) [see book]
    Lessons learned from testing on previous projects are ignored and not placed into practice on the current project.
  7. Inadequate Test-Related Configuration Management (GEN-MGMT-7) [moved and expanded]
    Test-related work products (for example, the SUT, test cases, test scripts, test data, test tools, and test environments) are not under configuration management (CM), even though they almost always should be.
Top of page

Staffing Pitfalls (STF)

  1. Lack of Independence (GEN-STF-1) [see book]
    The test organization or project test team lacks adequate administrative, financial, and technical independence to withstand inappropriate pressure from development management to cut corners.
  2. Unclear Testing Responsibilities (GEN-STF-2) [see book]
    The testing responsibilities are unclear and do not adequately address which organizations, teams, and people are going to be responsible for and perform the different types of testing.
  3. Developers Responsible for All Testing (GEN-STF-3) [see book]
    Developers are responsible for all of the developmental testing that occurs during system or software development.
  4. Testers Responsible for All Testing (GEN-STF-4) [see book]
    Testers are responsible for all of the developmental testing that occurs during system or software development.
  5. Testers Responsible for Ensuring Quality (GEN-STF-5) [new]
    Testers are (solely) responsible for the quality of the system or software under test.
  6. Testers Fix Defects (GEN-STF-6) [new]
    The testers debug (diagnose and fix) the defects they find in the object under test (OUT) instead of merely reporting them to the appropriate developer(s).
  7. Users Responsible for Testing (GEN-STF-7) [new]
    The users are responsible for most of the testing, which occurs after the system(s) are operational.
  8. Inadequate Testing Expertise (GEN-STF-8) [see book]
    Some testers, developers, or other testing stakeholders have inadequate testing-related understanding, expertise, experience, or training.
  9. Inadequate Domain Expertise (GEN-STF-9) [new]
    Testers do not have adequate training, experience, and expertise in the system’s application domain.
  10. Adversarial Relationship (GEN-STF-10) [new]
    A counterproductive adversarial relationship exists between the testers and either management, the developers, or both.
  11. Too Few Testers (GEN-STF-11) [new]
    There are too few testers to perform all of the planned and needed testing.
  12. Allowing Developers to Close Defect Reports (GEN-STF-12) [new]
    The developer assigned to fix a defect is allowed to close the defect report without tester concurrence.
  13. Testing Death March (GEN-STF-13) [new]
    Testing is a death march requiring unsustainable overwork by the testers, which ensures the failure of the testing program.
  14. All Testers Assumed Equal (GEN-STF-14) [new]
    Project managers or other testing stakeholders with influence over staffing mistakenly assume that all testers are equal and interchangeable.
Top of page

Process Pitfalls (PRO)

  1. No Planned Testing Process (GEN-PRO-1) [new]
    There is no real testing process because all testing (if any) is totally ad hoc and completely up to the whims of the individual developers.
  2. Essentially No Testing (GEN-PRO-2) [new]
    Essentially no explicit developmental or operational testing is being performed. All testing is being implicitly performed by the users of the system or software. This pitfall is also known as “Test by User”.
  3. Inadequate Testing (GEN-PRO-3) [see book]
    The testers or developers fail to adequately test certain testable behaviors, characteristics, or components of the system or software under test.
  4. Testing Process Ignored (GEN-PRO-4) [new]
    The testers, developers, or managers ignore the official documented as-planned test process.
  5. One-Size-Fits-All Testing (GEN-PRO-5) [see book]
    All testing is performed the same way, to the same level of rigor, regardless of its criticality.
  6. Testing and Engineering Processes Not Integrated (GEN-PRO-6) [see book]
    The testing process is not adequately integrated into the overall system engineering process, but is rather treated as a separate specialty engineering activity with only limited interfaces with the primary engineering activities.
  7. Too Immature for Testing (GEN-PRO-7) [see book]
    Objects under test (OUTs) are delivered for testing when they are immature and not ready to be tested.
  8. Inadequate Evaluations of Test Assets (GEN-PRO-8) [see book]
    The quality of the test assets is not adequately evaluated prior to using them.
  9. Inadequate Maintenance of Test Assets (GEN-PRO-9) [see book]
    Test assets are not properly maintained (that is, adequately updated and iterated) as defects are found and the object under test (OUT) is changed.
  10. Test Assets Not Delivered (GEN-PRO-10)
    The system or software under test is delivered without its associated testing assets that would enable the receiving organization(s) to test new capabilities and perform regression testing after changes.
  11. Testing as a Phase (GEN-PRO-11) [see book]
    Testing is treated as a phase that takes place late in a sequential (also known as waterfall) development cycle instead of as an ongoing activity that takes place continuously in an iterative, incremental, and concurrent (an evolutionary, or agile) development cycle.
  12. Testers Not Involved Early (GEN-PRO-12) [see book]
    Testers are not involved at the beginning of the project, but rather only once an implementation exists to test.
  13. Developmental Testing During Production (GEN-PRO-13) [new]
    Significant system testing is postponed until the system is already in production, when fixing defects is much more difficult and expensive.
  14. No Operational Testing (GEN-PRO-14) [see book]
    No one is performing any operational testing of the “completed” system under actual operational conditions.
  15. Testing in Quality (GEN-PRO-15) [new]
    Testing stakeholders rely on testing quality into the system/software under test rather than building quality in from the beginning via all engineering and management activities.
  16. Developers Ignore Testability (GEN-PRO-16) [new]
    The system or software under test (SUT) is unnecessarily difficult to test because the developers did not consider testing when designing and implementing the system or software.
  17. Failure to Address the BackBlob (GEN-PRO-17) [new]
    Testers do not adequately deal with their increasing workload due to an ever-increasing backlog of testing work, including manual regression testing and the maintenance of automated tests.
  18. Failure to Analyze Why Defects Escaped Detection (GEN-PRO-18) [new]
    The testers fail to analyze the defects that should have been uncovered by the testing that was performed but instead escaped detection.
  19. Official Test Standards are Ignored (GEN-PRO-19) [new]
    The testers and other testing stakeholders ignore all existing official test standards such as the international software testing standards ISO/IEC/IEEE 29119.
  20. Official Test Standards are Slavishly Followed (GEN-PRO-20) [new]
    The testers fail to appropriately tailor one or more official test standards but rather slavishly comply with all of them.
  21. Developing New When Old Fails Tests (GEN-PRO-21) [new]
    The developers develop new software when existing software still fails one or more tests.
  22. Integrating New or Updates When Fails Tests (GEN-PRO-22) [new]
    The developers integrate either new software or updated existing software into the official codebase when it still fails one or more tests.
Top of page

Test Design Pitfalls (TDES) [new]

  1. Sunny-Day Testing Only (GEN-TDES-1) [new]
    Testing is largely or totally restricted to verifying whether the system or software under test does what it should under normal situations, without verifying whether it properly handles rainy-day situations (that is, errors, faults, or failures); see the sketch following this list.
  2. Inadequate Test Prioritization (GEN-TDES-2) [see book]
    Testing is not adequately prioritized (for example, all types of testing have the same priority).
  3. Test-Type Confusion (GEN-TDES-3) [see book]
    Test cases from one type of testing are redundantly repeated as part of another type of testing, even though the testing types have quite different purposes and scopes.
  4. Functionality Testing Overemphasized (GEN-TDES-4) [see book]
    There is an overemphasis on testing functionality as opposed to testing quality, data, and interface requirements and testing architectural, design, and implementation constraints.
  5. System Testing Overemphasized (GEN-TDES-5) [see book]
    There is an overemphasis on black-box system testing for requirements conformance, and there is very little white-box unit and integration testing to verify the architecture, design, and implementation.
  6. System Testing Underemphasized (GEN-TDES-6) [see book]
    There is an overemphasis on white-box unit and integration testing, and very little time is spent on black-box system testing to verify conformance to the requirements.
  7. Test Preconditions Ignored (GEN-TDES-7) [new]
    Test cases do not address preconditions such as the system’s internal modes and states as well as the state(s) of the system’s external environment.
  8. Test Oracles Ignore Nondeterministic Behavior (GEN-TDES-8) [new]
    Testers have no criteria for determining whether a test has passed when nondeterministic behavior results in intermittent failures and faults.
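
As a minimal illustration of GEN-TDES-1, here is a sketch using Python's standard unittest framework; the withdraw function and its error behavior are hypothetical, invented purely for this example. The sunny-day test alone would miss both rainy-day cases.

```python
import unittest

# Hypothetical function under test: returns the new balance and is expected
# to reject invalid requests by raising ValueError.
def withdraw(balance, amount):
    if amount <= 0:
        raise ValueError("amount must be positive")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

class WithdrawTest(unittest.TestCase):
    def test_sunny_day(self):
        # The typical "does it work?" case that most test suites do cover.
        self.assertEqual(withdraw(100, 30), 70)

    def test_rainy_day_overdraft(self):
        # Error handling that GEN-TDES-1 says is commonly left untested.
        with self.assertRaises(ValueError):
            withdraw(100, 200)

    def test_rainy_day_nonpositive_amount(self):
        with self.assertRaises(ValueError):
            withdraw(100, -5)

if __name__ == "__main__":
    unittest.main()
```
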
Top of page

Pitfall-Related Pitfalls (PRP) [new]

  1. Overly Ambitious Process Improvement (GEN-PRP-1) [new]
    Management or the test team is overly ambitious with regard to improving the testing process, attempting to address too many relevant testing pitfalls at once.
  2. Inadequate Pitfall Prioritization (GEN-PRP-2) [new]
    Test managers do not adequately prioritize the testing pitfalls (for example, by relevance, frequency, severity of negative consequences, or risk) when attempting to improve the testing process by better addressing testing pitfalls.
Top of page

Test Tools and Environments Pitfalls (TTE)

  1. Over-Reliance on Testing Tools (GEN-TTE-1) [see book]
    Testers and other testing stakeholders place too much reliance on commercial off-the-shelf (COTS) and homegrown testing tools.
  2. Poor Preparation for Numerous Platforms (GEN-TTE-2) [see book]
    The test team and testers are not adequately prepared for testing applications that will execute on numerous target platforms (for example, hardware, operating system, and middleware).
  3. Target Platform Difficult to Access (GEN-TTE-3) [see book]
    The testers are not prepared to perform adequate testing when the target platform is not designed to enable access for testing.
  4. Inadequate Test Environments (GEN-TTE-4) [see book]
    There are insufficient test tools, test environments or test beds, and test laboratories or facilities, so adequate testing cannot be performed within the schedule and personnel limitations.
  5. Poor Fidelity of Test Environments (GEN-TTE-5) [see book]
    The testers build and use test environments or test beds that have poor fidelity to the operational environment of the system or software under test (SUT), and this causes inconclusive or incorrect test results (false-positive and false-negative test results).
  6. Inadequate Test Environment Quality (GEN-TTE-6) [see book]
    The quality of one or more test environments is inadequate due to an excessive number of defects.
  7. Test Environments Inadequately Tested (GEN-TTE-7) [new]
    Testers do not test their test environments/beds to eliminate defects that could either prevent the testing of the system or software under test or cause incorrect test results.
  8. Inadequate Testing in a Staging Environment (GEN-TTE-8) [new]
    There is little or no testing of the updated software in a staging environment that matches the operational (production) environment prior to switchover and being placed into actual operation.
  9. Insecure Test Environment (GEN-TTE-9) [new]
    A test environment has weak security, thereby allowing access to new software, software patches, and other connected domains.
  10. Improperly Configured Test Environment (GEN-TTE-11) [new]
    A test environment has the wrong configuration at the start of testing.
Top of page

Automated Testing Pitfalls (AUTO) [new]

  1. Automated Testing Not Treated As Project (GEN-AUTO-1) [new]
    Test automation is not treated as a project with its own need for sufficient resources and management oversight.
  2. Insufficient Automated Testing (GEN-AUTO-2) [moved from the test tools and environments category]
    Testers place too much reliance on manual testing so that an insufficient amount of testing is automated.
  3. Automated Testing Replaces Manual Testing (GEN-AUTO-3) [new]
    Managers, developers, or testers attempt to replace all manual testing with automated testing.
  4. Automated Testing Replaces Testers (GEN-AUTO-4) [new]
    Managers have the mistaken belief that test automation eliminates the need for some testers.
  5. Inappropriate Distribution of Automated Tests (GEN-AUTO-5) [new]
    The distribution of the amount of automated testing among the different levels of testing (such as unit testing, integration testing, system testing, and user interface testing) is inappropriate.
  6. Inadequate Automated Test Quality (GEN-AUTO-6) [new]
    The automated tests have excessive numbers of defects.
  7. Excessively Complex Automated Tests (GEN-AUTO-7) [new]
    The automated tests are significantly more complex than they need to be; see the sketch following this list.
  8. Automated Tests Not Maintained (GEN-AUTO-8) [new]
    The automated tests are not maintained, so they are no longer trusted or reusable.
  9. Insufficient Resources Invested (GEN-AUTO-9) [new]
    Insufficient resources are allocated to plan for, develop, and maintain automated tests.
  10. Inappropriate Automation Tools (GEN-AUTO-10) [new]
    The developers and testers select inappropriate tools for supporting automated testing.
  11. Unclear Responsibilities for Automated Testing (GEN-AUTO-11) [new]
    It is unclear who is responsible for developing and maintaining the automated tests.
  12. Postponing Automated Testing Until Stable (GEN-AUTO-12) [new]
    Automated testing is postponed until the system/software under test (SUT) is “stable”.
  13. Automated Testing as Silver Bullet (GEN-AUTO-13) [new]
    Automated testing is treated as a silver bullet that will solve all testing problems.
  14. Incompatible Automation Tools (GEN-AUTO-14) [new]
    The test automation tools are incompatible with each other (such as test design, test execution, and test reporting tools) or with other tools (such as Continuous Integration (CI), Continuous Delivery (CD), and Continuous Change Management (CCM) tools).
  15. Testers Make Good Test Automation Engineers (GEN-AUTO-15) [new]
    It is mistakenly believed that typical testers will be good test automation engineers.
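
To illustrate one way of avoiding GEN-AUTO-7, the following sketch uses pytest; the normalize_username function and the test data are hypothetical. Each automated test is small and single-purpose, with variation expressed as a data table rather than as branching test logic.

```python
import pytest

# Hypothetical function under test.
def normalize_username(name):
    return name.strip().lower()

# Each automated test is one small, single-purpose check; the variation
# lives in a data table instead of in branching test logic.
@pytest.mark.parametrize("raw, expected", [
    ("  Alice ", "alice"),
    ("BOB", "bob"),
    ("carol", "carol"),
])
def test_normalize_username(raw, expected):
    assert normalize_username(raw) == expected
```
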
Top of page

Test Communication Pitfalls (COM)

  1. Inadequate Source Documentation (GEN-COM-1) [expanded in scope and renamed]
    Requirements engineers, architects, and designers produce inadequate documentation (for example, models and documents) to support testing, or such documentation is not provided to the testers.
  2. Inadequate Discrepancy Reports (GEN-COM-2) [see book]
    Testers and others create discrepancy reports (also known as bug, defect, and trouble reports) that are incomplete, contain incorrect information, or are difficult to read.
  3. Inadequate Test Documentation (GEN-COM-3) [see book]
    Testers create test documentation that is incomplete or contains incorrect information.
  4. Source Documents Not Maintained (GEN-COM-4) [see book]
    Developers do not properly maintain the requirements specifications, architecture documents, design documents, and associated models that are needed as inputs to the development of tests.
  5. Inadequate Communication Concerning Testing (GEN-COM-5) [see book]
    There is inadequate verbal and written communication concerning the testing among testers and other testing stakeholders.
  6. Inconsistent Testing Terminology (GEN-COM-6) [new]
    Different testers, developers, managers, and other testing stakeholders often use inconsistent and ambiguous technical jargon so that the same word has different meanings and different words have the same meaning.
  7. Excessive Test Documentation (GEN-COM-7) [new]
    Too much test documentation is being produced and maintained.
Top of page

Testing-as-a-Service Pitfalls (TaaS) [new]

  1. Cost-Driven Provider Selection (GEN-TaaS-1) [new]
    Executive or administrative management selects the TaaS provider based solely on minimizing cost.
  2. Inadequate Oversight (GEN-TaaS-2) [new]
    Project management does not provide adequate oversight of the TaaS provider’s testing effort.
  3. Lack of Outsourcing Expertise (GEN-TaaS-3) [new]
    Project administrative and technical management has insufficient training, expertise, and experience in outsourcing, especially in outsourcing testing as a service.
  4. Inadequate TaaS Contract (GEN-TaaS-4) [new]
    The contract between the development/maintenance organization and the TaaS contractor/vendor does not adequately address the project’s Key Performance Indicators (KPIs), associated Service Level Agreements (SLAs), and the specific metrics by which achievement of the SLAs will be measured.
  5. TaaS Improperly Chosen (GEN-TaaS-5) [new]
    TaaS is selected for the outsourcing of a type of testing for which it is an inappropriate choice.
Top of page

Requirements-Related Pitfalls (REQ)

  1. Tests as Requirements (GEN-REQ-1) [new]
    Developers use black-box system- and subsystem-level tests as a replacement for the associated system and subsystem requirements.
  2. Ambiguous Requirements (GEN-REQ-2) [see book]
    Testers misinterpret a great many ambiguous requirements and therefore base their testing on incorrect interpretations of these requirements.
  3. Obsolete Requirements (GEN-REQ-3) [see book]
    Testers waste effort and time testing whether the system or software under test (SUT) correctly implements a great many obsolete requirements.
  4. Missing Requirements (GEN-REQ-4) [see book]
    Testers overlook many undocumented requirements and therefore do not plan for, develop, or run the associated overlooked test cases.
  5. Incomplete Requirements (GEN-REQ-5) [see book]
    Testers fail to detect that many requirements are incomplete; therefore, they develop and run correspondingly incomplete or incorrect test cases.
  6. Incorrect Requirements (GEN-REQ-6) [see book]
    Testers fail to detect that many requirements are incorrect, and therefore develop and run correspondingly incorrect test cases that produce false-positive and false-negative test results.
  7. Requirements Churn (GEN-REQ-7) [see book]
    Testers waste an excessive amount of time and effort developing and running test cases based on many requirements that are not sufficiently stable and that therefore change one or more times prior to delivery.
  8. Improperly Derived Requirements (GEN-REQ-8) [see book]
    Testers base their testing on improperly derived requirements, resulting in missing test cases, test cases at the wrong level of abstraction, or incorrect test cases based on crosscutting requirements that are allocated without modification to multiple architectural components.
  9. Verification Methods Not Adequately Specified (GEN-REQ-9) [see book]
    Testers (or other developers) fail to adequately specify the verification method(s) for each requirement, thereby causing requirements to be verified using unnecessarily inefficient or ineffective verification method(s).
  10. Lack of Requirements Trace (GEN-REQ-10) [see book]
    The testers do not trace the requirements to individual tests or test cases, thereby making it unnecessarily difficult to determine whether the tests are inadequate or excessive; see the sketch following this list.
  11. Deferred Requirements and the Titanic Effect (GEN-REQ-11) [new]
    Managers or chief engineers repeatedly defer more and more requirements (as well as residual defects and defect fixes) from the previous increment, block, or build to the current one after the resources for the current one have been allocated. This results in the "Titanic Effect," in which water (deferred requirements) flows from one watertight compartment (increment) over the bulkhead into the next, so that the ship (project) floats lower and lower in the water until it eventually sinks (the project is cancelled). This continual deferral of requirements has a titanic effect on the amount of testing to be performed and the resources needed to accomplish it.
  12. Implicit Requirements Ignored (GEN-REQ-12) [new]
    Testers ignore implicit requirements and only test for conformance with the explicitly specified requirements.
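
One lightweight way to avoid GEN-REQ-10, sketched here with pytest custom markers; the authenticate function and the SRS-nnn requirement IDs are hypothetical. Tagging every test with the requirement it verifies makes it straightforward to detect untested requirements and tests that verify nothing.

```python
import pytest

# Hypothetical system under test.
def authenticate(user, password):
    return (user, password) == ("alice", "correct-horse")

# Hypothetical convention: tag each test with the ID of the requirement it
# verifies. Registering the marker in pytest.ini avoids warnings:
#   [pytest]
#   markers = req(id): requirement verified by this test
@pytest.mark.req("SRS-041")
def test_login_accepts_valid_credentials():
    assert authenticate("alice", "correct-horse") is True

@pytest.mark.req("SRS-042")
def test_login_rejects_bad_password():
    assert authenticate("alice", "wrong-password") is False
```

A marker report (or a simple script scanning the markers) can then be diffed against the requirements list in either direction.
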
Top of page

Test Data Pitfalls (TDAT) [new]

  1. Inadequate Test Data (GEN-TDAT-1) [new]
    The test data (including individual test data and sets of test data) lack adequate fidelity to operational data, are incomplete, or are invalid.
  2. Production Data in Test Data (GEN-TDAT-2) [new]
    Actual production (operational) data is used in or as the test data; see the sketch following this list.
  3. Test Data in Production Data (GEN-TDAT-3) [new]
    Test data is used in or as the actual production (operational) data.
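
As one possible mitigation for GEN-TDAT-2, the following minimal Python sketch derives test data from production records by irreversibly pseudonymizing the identifying fields; the record layout, field names, and salt are purely illustrative.

```python
import hashlib

# One-way pseudonymization: the original value cannot be recovered from
# the masked value.
def mask(value, salt="test-data-salt"):
    return hashlib.sha256((salt + value).encode()).hexdigest()[:12]

# Derive a test record from a production record, masking identifying
# fields and keeping non-identifying fields as-is.
def to_test_record(prod_record):
    return {
        "customer_id": mask(prod_record["customer_id"]),
        "email": mask(prod_record["email"]) + "@example.invalid",
        "balance": prod_record["balance"],
    }

print(to_test_record(
    {"customer_id": "C-1001", "email": "alice@example.com", "balance": 250}
))
```
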
Top of page

Test-Type-Specific Pitfalls (TTS)

The following pitfalls are primarily restricted to a single type of testing:

Executable Model Pitfalls (MOD) [new]

  1. Inadequate Executable Models (TTS-MOD-1) [new]
    Either there are no executable requirements, architectural, or design models, or else the models that exist are inadequate to enable the associated test cases to be manually or automatically developed.
  2. Executable Models Not Tested (TTS-MOD-2) [new]
    No one (such as testers, requirements engineers, architects, or designers) is testing the executable requirements, architectural, or design models to verify whether they conform to the requirements or incorporate any defects.
Top of page

Unit Testing Pitfalls (UNT)

  1. Testing Does Not Drive Design and Implementation (TTS-UNT-1) [see book]
    Software developers and testers do not develop their tests first and then use these tests to drive development of the associated architecture, design, and implementation.
  2. Conflict of Interest (TTS-UNT-2) [see book]
    Nothing is done to address the following conflict of interest that exists when developers test their own work products: essentially, they are being asked to demonstrate that their software is defective.
  3. Untestable Units (TTS-UNT-3) [new]
    The unnecessarily large size and complexity of one or more software units under test (UUTs) makes them essentially untestable.
  4. Brittle Test Cases (TTS-UNT-4) [new]
    Unit test cases are too brittle, needing to be changed unnecessarily whenever the unit under test changes; see the sketch following this list.
  5. No Unit Testing (TTS-UNT-5) [new]
    Unit testing is replaced by higher-level testing such as integration testing and system testing.
  6. Unit Testing of Automatically-Generated Units (TTS-UNT-6) [new]
    Units that are automatically generated from a design model are unit tested, even though the design has already been verified.
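
The following unittest sketch illustrates TTS-UNT-4; the format_greeting function is hypothetical. The first test is pinned to the exact output string and breaks on any cosmetic change, whereas the second verifies only the behavior that matters.

```python
import unittest

# Hypothetical unit under test.
def format_greeting(name):
    return "Hello, " + name.strip() + "!"

class GreetingTest(unittest.TestCase):
    def test_brittle(self):
        # Brittle: pinned to the exact output string, so any cosmetic change
        # to the greeting's wording or punctuation breaks the test.
        self.assertEqual(format_greeting(" Ada "), "Hello, Ada!")

    def test_more_robust(self):
        # Checks only the behavior that matters here: the name appears and
        # its surrounding whitespace has been trimmed.
        result = format_greeting(" Ada ")
        self.assertIn("Ada", result)
        self.assertNotIn(" Ada ", result)

if __name__ == "__main__":
    unittest.main()
```
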
Top of page

Integration Testing Pitfalls (INT)

  1. Integration Decreases Testability Ignored (TTS-INT-1) [see book]
    Testers fail to take into account that integration encapsulates the individual parts of the whole and the interactions between them, thereby making the internal parts of the integrated whole less observable and less controllable and, therefore, less testable.
  2. Inadequate Self-Testing (TTS-INT-2) [see book]
    Testers are unprepared to address the difficulty of testing encapsulated components due to a lack of system- or software-internal self-tests.
  3. Unavailable Components (TTS-INT-3) [see book]
    Integration testing must be postponed due to the unavailability of (1) system hardware or software components or (2) test environment components.
  4. System Testing as Integration Testing (TTS-INT-4) [see book]
    Testers are actually performing system-level tests of system functionality when they are supposed to be performing integration testing of component interfaces and interactions.
Top of page

Specialty Engineering Testing Pitfalls (SPC)

  1. Inadequate Capacity Testing (TTS-SPC-1) [see book]
    Testers perform little or no capacity testing (or the capacity testing they do perform is superficial) to determine the degree to which the system or software degrades gracefully as capacity limits are approached, reached, and exceeded.
  2. Inadequate Concurrency Testing (TTS-SPC-2) [see book]
    Testers perform little or no concurrency testing (or the concurrency testing they do perform is superficial) to explicitly uncover the defects that cause the common types of concurrency faults and failures: deadlock, livelock, starvation, priority inversion, race conditions, inconsistent views of shared memory, and unintentional infinite loops.
  3. Inadequate Configurability Testing (TTS-SPC-3) [new]
    Testers perform little or no configurability testing to determine whether the SUT and its components can be and have been properly configured.
  4. Inadequate Interface Standards Conformance Testing (TTS-SPC-4) [new]
    Testers perform little or no conformance testing of key interfaces to open interface standards (or the conformance testing they do perform is superficial) to determine whether the system truly has an Open System Architecture (OSA).
  5. Inadequate Internationalization Testing (TTS-SPC-5) [see book]
    Testers perform little or no internationalization testing (or the internationalization testing they do perform is superficial) to determine the degree to which the system is configurable to perform appropriately in multiple countries.
  6. Inadequate Interoperability Testing (TTS-SPC-6) [see book]
    Testers perform little or no interoperability testing (or the interoperability testing they do perform is superficial) to determine the degree to which the system successfully interfaces and collaborates with other systems.
  7. Inadequate Performance Testing (TTS-SPC-7) [see book]
    Testers perform little or no performance testing (or the testing they do perform is only superficial) to determine the degree to which the system has adequate levels of the performance quality attributes: event schedulability, jitter, latency, response time, and throughput.
  8. Inadequate Portability Testing (TTS-SPC-8) [new]
    Testers perform little or no portability testing (also known as configuration testing) to determine the degree to which the software under test behaves correctly when executing on different target platforms (that is, hardware, operating systems, and middleware).
  9. Inadequate Reliability Testing (TTS-SPC-9) [see book]
    Testers perform little or no long-duration reliability testing (also known as stability testing), or the reliability testing they do perform is superficial (for example, it is not done under operational profiles and is not based on the results of any reliability models), to determine the degree to which the system continues to function over time without failure.
  10. Inadequate Robustness Testing (TTS-SPC-10) [see book]
    Testers perform little or no robustness testing, or the robustness testing they do perform is superficial (for example, it is not based on the results of any robustness models), to determine the degree to which the system exhibits adequate error, fault, failure, and environmental tolerance.
  11. Inadequate Safety Testing (TTS-SPC-11) [see book]
    Testers perform little or no safety testing, or the safety testing they do perform is superficial (for example, it is not based on the results of a safety or hazard analysis), to determine the degree to which the system is safe from causing or suffering accidental harm.
  12. Inadequate Security Testing (TTS-SPC-12) [see book]
    Testers perform little or no security testing, or the security testing they do perform is superficial (for example, it is not based on the results of a security or threat analysis), to determine the degree to which the system is secure from causing or suffering malicious harm.
  13. Inadequate Usability Testing (TTS-SPC-13) [see book]
    Testers or usability engineers perform little or no usability testing, or the usability testing they do perform is superficial, to determine the degree to which the system’s human-machine interfaces meet the system’s requirements for usability, manpower, personnel, training, human factors engineering (HFE), and habitability.
Top of page

System Testing Pitfalls (SYS)

  1. Test Hooks Remain (TTS-SYS-1) [see book]
    Testers fail to remove temporary test hooks after completing testing, so the hooks remain in the delivered or fielded system; see the sketch following this list.
  2. Lack of Test Hooks (TTS-SYS-2) [see book]
    Testers fail to take into account how a lack of test hooks makes it more difficult to test parts of the system hidden via information hiding.
  3. Inadequate End-to-End Testing (TTS-SYS-3) [see book]
    Testers perform inadequate system-level functional testing of a system’s end-to-end support for its missions.
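
One common way to manage TTS-SYS-1 and TTS-SYS-2 together is to gate every test hook behind a single explicit build or configuration switch, so that delivered builds can be mechanically checked for leftover hooks. The Python sketch below is illustrative only; the class, the hook, and the ENABLE_TEST_HOOKS environment variable are hypothetical.

```python
import os

# Single explicit switch controlling all test hooks; illustrative only.
TEST_HOOKS_ENABLED = os.environ.get("ENABLE_TEST_HOOKS") == "1"

class FlightController:
    def __init__(self):
        self._state = "idle"

    def engage(self):
        self._state = "engaged"

    # Test hook: exposes otherwise-hidden state for observability during
    # testing, but only when the switch is enabled.
    if TEST_HOOKS_ENABLED:
        def _test_get_state(self):
            return self._state

# A release-build check can then verify that no hook remained in the
# delivered configuration (TTS-SYS-1).
if not TEST_HOOKS_ENABLED:
    assert not hasattr(FlightController, "_test_get_state")
```
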
Top of page

User Testing Pitfalls (UT) [new]

  1. Inadequate User Involvement (TTS-UT-1) [new]
    Too few users representing too few of the different types of users are involved in the performance of user testing and the evaluation of its results.
  2. Unprepared User Representatives (TTS-UT-2) [new]
    The user representatives are not adequately prepared to effectively and efficiently perform user testing.
  3. User Testing Merely Repeats System Testing (TTS-UT-3) [new]
    User testing is merely a repetition, by representative users, of a subset of the existing system tests.
  4. User Testing is Mistaken for Acceptance Testing (TTS-UT-4) [new]
    User testing, often referred to as User Acceptance Testing (UAT), is frequently confused with system acceptance testing in spite of their very different goals and descriptions.
  5. Assuming Knowledgeable and Careful Users (TTS-UT-5) [new]
    Testers (and developers) mistakenly assume that the user will be careful and as knowledgeable as they are about how the system will work.
  6. User Testing Too Late to Fix Defects (TTS-UT-6) [new]
    User testing occurs so late during development (for example, immediately prior to release) that it is too late to fix the defects that were uncovered.
Top of page

A/B Testing Pitfalls (ABT) [new]

  1. Poor Key Performance Indicators (TTS-ABT-1) [new]
    The key performance indicators (KPIs) of the testing do not support business or mission goals.
  2. Misuse of Probability and Statistics (TTS-ABT-2) [new]
    Probability and statistics are misused when interpreting the results of A/B testing; see the worked example following this list.
  3. Confusing Statistical Significance for Business Significance (TTS-ABT-3) [new]
    All statistically significant test results are mistakenly assumed to be sufficiently significant to justify choosing one variant over another, even if the benefits do not justify the associated costs.
  4. Error Source(s) not Controlled (TTS-ABT-4) [new]
    Various sources of error are not controlled during the testing.
  5. System Variant(s) Changed During Test (TTS-ABT-5) [new]
    One or both of the variants of the system or software under test are changed during an A/B test.
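
As a worked example for TTS-ABT-2 and TTS-ABT-3, the following Python sketch applies a standard two-proportion z-test to made-up conversion counts; all numbers are illustrative. With a million users per variant, a lift of only 0.1 percentage point is statistically significant (p < 0.05), yet it may still be too small to justify choosing variant B once costs are considered.

```python
from math import erf, sqrt

# Standard two-sided two-proportion z-test on conversion counts.
def two_proportion_z(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, z, p_value

# Made-up counts: variant B converts 10.10% of a million users versus
# 10.00% for variant A.
lift, z, p_value = two_proportion_z(100_000, 1_000_000, 101_000, 1_000_000)
print(f"lift = {lift:.4%}, z = {z:.2f}, p = {p_value:.4f}")
# The result is statistically significant (p < 0.05), but whether a
# 0.1-percentage-point lift justifies shipping variant B is a separate
# business judgment (TTS-ABT-3).
```
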
Top of page

Acceptance Testing Pitfalls (AT) [new]

  1. No Clear System Acceptance Criteria (TTS-AT-1) [new]
    No clear, well-documented, and agreed-upon criteria exist for the acquisition/customer organization accepting delivery of (and paying for) the completed system from the development organization.
  2. Acceptance Testing Only Tests Functionality (TTS-AT-2) [new]
    Acceptance testing only tests the functionality of the system/software under test and does not test the SUT’s acceptability in terms of its quality characteristics and the constraints it must meet.
  3. Developers Determine Acceptance Tests (TTS-AT-3) [new]
    The developers rather than the acquirers or users define the acceptance tests and thereby the criteria by which the SUT is acceptable.
Top of page

Operational Testing Pitfalls (OT) [new]

  1. No On-Site SW Developers (TTS-OT-1) [new]
    No software developers are on-site to rapidly debug defects that prevent further testing during operational testing that requires scarce or expensive dedicated resources.
  2. Inadequate Operational Testing (TTS-OT-2) [new]
    There is little or no testing of the SUT after acceptance testing, when it is operating in its operational environment.
Top of page

System of Systems Testing Pitfalls (SoS)

  1. Inadequate SoS Planning (TTS-SoS-1) [see book]
    Testers and SoS architects perform an inadequate amount of SoS test planning and fail to appropriately document their plans in SoS-level test planning documentation.
  2. Unclear SoS Testing Responsibilities (TTS-SoS-2) [see book]
    Managers or testers fail to clearly define and document the responsibilities for performing end-to-end SoS testing.
  3. Inadequate Resources for SoS Testing (TTS-SoS-3) [see book]
    Management fails to provide adequate resources for system of systems (SoS) testing.
  4. SoS Testing Not Properly Scheduled (TTS-SoS-4) [see book]
    System of systems testing is not properly scheduled and coordinated with the individual systems’ testing and delivery schedules.
  5. Inadequate SoS Requirements (TTS-SoS-5) [see book]
    Many SoS-level requirements are missing, are of poor quality, or are never officially approved or funded.
  6. Inadequate Support from Individual System Projects (TTS-SoS-6) [see book]
    Test support from individual system development or maintenance projects is inadequate to perform system of systems testing.
  7. Inadequate Defect Tracking Across Projects (TTS-SoS-7) [see book]
    Defect tracking across individual system development or maintenance projects is inadequate to support system of systems testing.
  8. Finger-Pointing (TTS-SoS-8) [see book]
    Different system development or maintenance projects assign the responsibility for finding and fixing SoS-level defects to other projects.
Top of page

Regression Testing Pitfalls (REG)

  1. Inadequate Regression Test Automation (TTS-REG-1) [see book]
    Testers and developers have automated an insufficient number of tests to enable adequate regression testing.
  2. Regression Testing Not Performed (TTS-REG-2) [see book]
    Testers and maintainers perform insufficient regression testing to determine if new defects have been accidentally introduced when changes are made to the system.
  3. Inadequate Scope of Regression Testing (TTS-REG-3) [see book]
    The scope of regression testing is insufficiently broad.
  4. Only Low-Level Regression Tests (TTS-REG-4) [see book]
    Only low-level (for example, unit-level and possibly integration) regression tests are rerun, so there is no system, acceptance, or operational regression testing and no SoS regression testing.
  5. Test Resources Not Delivered for Maintenance (TTS-REG-5) [see book]
    The test resources produced by the development organization are not made available to the maintenance organization to support testing new capabilities and regression testing changes.
  6. Only Functional Regression Testing (TTS-REG-6) [see book]
    Testers and maintainers only perform regression testing to determine if changes introduce functionality-related defects.
  7. Inadequate Retesting of Reused Software (TTS-REG-7) [new]
    Developers reuse software without adequately retesting it to ensure that it continues to operate correctly as part of the current system or application.