Digital design verification is the process of testing and validating the correctness and functionality of a digital design or system before it is released or deployed. It is an essential step in the development process of digital systems and is crucial in ensuring that the system meets the required specifications and performance standards.
Cost of bugs over time:
Bugs found at the block level have little cost.
Bugs found at the system level may affect time-to-market.
Bugs found after fabrication require an expensive re-spin.
Bugs found by customers can cost hundreds of millions and, worst of all, damage the company's reputation.
Verification is the most important aspect of the product development process, consuming as much as 80% of the total product development time.
How do you know when you are done?
How do you know that your specification is complete?
How do you verify the verifier?
The goal of digital design verification is to identify and eliminate any design errors or bugs, and to ensure that the system performs as expected under different conditions and use cases. The process involves creating a verification environment that can simulate various scenarios and test the system's behavior under different conditions.
Design verification is one of the most time-consuming tasks in a project life cycle because it involves ensuring that the design meets all the required specifications and performs as expected.
Here are a few reasons why design verification can be time-consuming:
Increasing design complexity: As the complexity of the design increases, the number of possible scenarios that need to be verified also increases. This means that the number of test cases and simulations required to verify the design also increases, making design verification a time-consuming task.
Iterative process: Design verification is an iterative process that involves running simulations, analyzing results, and fixing design errors. This process may need to be repeated multiple times until the design meets all the required specifications and performs as expected. This iterative process can be time-consuming, especially for complex designs with many interactions between different modules.
Development of verification environment: The development of a verification environment is a time-consuming task that involves creating testbenches, developing test cases, and running simulations. The verification environment must be comprehensive and cover all possible scenarios, which can be a time-consuming task.
Time-sensitive nature of design verification: Design verification is a time-sensitive task because it must be completed before the design is fabricated. Any design errors or bugs that are not identified during the verification process can result in costly rework or delays in the project schedule.
To provide a rough estimate, the industry rule of thumb is that verification can take up to 70-80% of the total design time. However, this can vary widely depending on the design complexity, verification methodology used, and the expertise and experience of the verification team.
Missing a hardware bug in verification can be costly in terms of time, money, and reputation. Here are a few potential costs of missing a hardware bug in verification:
Rework and delay: If a hardware bug is not identified during verification, it may not be discovered until later in the design cycle, or even after the design has been fabricated. Fixing hardware bugs after the design has been fabricated can be costly and time-consuming, and can result in delays to the project schedule.
Lost revenue: If a hardware bug is discovered after the design has been released to market, it can result in lost revenue and damage to the company's reputation. Customers may lose confidence in the product, leading to decreased sales and revenue.
Product recalls: In some cases, hardware bugs can be severe enough to require a product recall. This can be a costly and time-consuming process, and can result in significant damage to the company's reputation.
Legal liabilities: If a hardware bug results in harm or injury to users, the company may face legal liabilities and damages.
The process of reaching the verification goal starts with the definition of the verification goal.
The goal is typically to reach 100% of the code coverage and functional coverage targets defined in the verification plan.
Verification is an iterative process that continues until the desired level of confidence is achieved. There is no fixed rule for when to stop verification as it depends on various factors such as project requirements, schedule, budget, and risk tolerance. However, some common criteria for stopping verification are:
Achieving code coverage goals: The verification team can stop verification when all code coverage goals (branch, statement, expression, toggle, and FSM coverage) have been met and the overall code coverage is at an acceptable level.
Achieving functional coverage goals: The verification team can stop verification when all functional coverage goals have been met and the functional coverage is at an acceptable level.
Meeting performance goals: Verification can be stopped when the design meets all performance goals, such as timing constraints and power consumption requirements.
Finding and fixing all critical bugs: Verification can be stopped when all critical bugs have been found and fixed, and no new critical bugs have been introduced for a certain period.
Available resources: Verification can be stopped if the available resources are not sufficient to continue, such as budget or time constraints.
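Code coverage is extracted automatically by the simulator, while functional coverage must be written by the verification engineer, typically as SystemVerilog covergroups. A minimal sketch follows; the signal names (clk, wr_en, addr) are assumptions for illustration, not from any specific design:

```systemverilog
// Functional-coverage sketch: did we see every operation in every region?
covergroup cg_bus @(posedge clk);
  cp_op   : coverpoint wr_en { bins write = {1}; bins read = {0}; }
  cp_addr : coverpoint addr[11:10];   // four coarse address regions
  cp_cross: cross cp_op, cp_addr;     // each operation in each region
endgroup

cg_bus cg = new();  // instantiate once; sampled on every posedge clk
```

The coverage report then shows which bins and crosses were never hit, pointing directly at the scenarios still missing from the stimulus.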
Simulation is used extensively to verify the design (VLSI circuits) before fabrication.
To simulate a single design block, you need to create tests that generate stimuli from all the surrounding blocks. The benefit is that these low-level simulations run very fast. However, you may find bugs in both the design and testbench as the latter will have a great deal of code to provide stimuli from the missing blocks. As you start to integrate design blocks, they can stimulate each other, reducing your workload. These multiple-block simulations may uncover more bugs, but they also run slower.
A testbench is HDL code that checks whether the RTL implementation meets the design specification. Its main purpose is to supply input waveforms to the design and to monitor its response.
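A minimal sketch of that idea, assuming a hypothetical adder DUT with ports a, b, and sum:

```systemverilog
module tb_top;
  logic [7:0] a, b;
  logic [7:0] sum;

  adder dut (.a(a), .b(b), .sum(sum));  // hypothetical DUT instance

  initial begin
    a = 8'd3; b = 8'd4;                 // supply the input stimulus
    #10;                                // wait for the output to settle
    if (sum !== 8'd7)                   // monitor and check the response
      $error("sum mismatch: expected 7, got %0d", sum);
    $finish;
  end
endmodule
```

Real testbenches replace the hard-coded stimulus and check with the reusable components described next.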
Transaction
A transaction is a class that holds the data structure used to communicate with the DUT. It is driven to the DUT by the driver, or reconstructed by the monitor from pin-level activity.
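A minimal transaction-class sketch; the field names and address range are illustrative assumptions, not from any specific protocol:

```systemverilog
class Transaction;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  rand bit        wr_en;   // 1 = write, 0 = read

  // Keep addresses inside an assumed 4 KB region
  constraint addr_range_c { addr < 32'h1000; }

  function void display(string tag = "TX");
    $display("[%s] addr=%0h data=%0h wr=%0b", tag, addr, data, wr_en);
  endfunction
endclass
```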
Generator
The generator creates randomized transactions (stimuli) and passes them to the driver.
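A generator sketch, assuming a transaction class named Transaction and a SystemVerilog mailbox as the channel to the driver:

```systemverilog
class Generator;
  mailbox #(Transaction) gen2drv;   // channel to the driver
  int num_txns = 10;                // how many transactions to generate

  task run();
    repeat (num_txns) begin
      Transaction tr = new();
      assert (tr.randomize())       // constrained-random stimulus
        else $error("randomize() failed");
      gen2drv.put(tr);              // blocking hand-off to the driver
    end
  endtask
endclass
```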
Driver
The driver interacts with the DUT. It receives randomized transactions from the generator and drives them to the DUT as pin-level activity.
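A driver sketch, assuming a virtual interface dut_if with clk, addr, wdata, and wr_en signals (all names are assumptions):

```systemverilog
class Driver;
  virtual dut_if vif;               // assumed pin-level interface to the DUT
  mailbox #(Transaction) gen2drv;   // channel from the generator

  task run();
    Transaction tr;
    forever begin
      gen2drv.get(tr);              // blocking get from the generator
      @(posedge vif.clk);           // drive on a clock edge
      vif.addr  <= tr.addr;
      vif.wdata <= tr.data;
      vif.wr_en <= tr.wr_en;
    end
  endtask
endclass
```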
Monitor
The monitor observes pin-level activity on the connected interface at the input and output of the design. This pin-level activity is converted into a transaction packet and sent to the scoreboard for checking.
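A monitor sketch; the interface signals (clk, valid, addr, rdata, wr_en) are assumptions for illustration:

```systemverilog
class Monitor;
  virtual dut_if vif;               // assumed pin-level interface
  mailbox #(Transaction) mon2scb;   // channel to the scoreboard

  task run();
    forever begin
      @(posedge vif.clk);
      if (vif.valid) begin          // sample only completed transfers
        Transaction tr = new();
        tr.addr  = vif.addr;        // convert pins back into a transaction
        tr.data  = vif.rdata;
        tr.wr_en = vif.wr_en;
        mon2scb.put(tr);            // forward for checking
      end
    end
  endtask
endclass
```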
Agent
An agent is a container that holds the generator, driver, and monitor. It helps build a structured hierarchy based on the protocol or interface requirements.
Scoreboard
The scoreboard receives transaction packets from the monitor and compares them against a reference model. The reference model is written based on an understanding of the design specification and expected design behavior.
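A scoreboard sketch that uses an associative array as a trivial reference model for an assumed memory-like DUT:

```systemverilog
class Scoreboard;
  mailbox #(Transaction) mon2scb;     // channel from the monitor
  bit [31:0] ref_mem [bit [31:0]];    // trivial reference model: address map

  task run();
    Transaction tr;
    forever begin
      mon2scb.get(tr);
      if (tr.wr_en)
        ref_mem[tr.addr] = tr.data;          // model the write
      else if (ref_mem[tr.addr] !== tr.data) // check the observed read data
        $error("Mismatch at %0h: expected %0h, got %0h",
               tr.addr, ref_mem[tr.addr], tr.data);
    end
  endtask
endclass
```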
Environment
An environment is a container that provides a well-structured hierarchy for agents and scoreboards.
Testbench top
The testbench top is the top-level component that includes the interface and DUT instances. It connects the design with the testbench.
Test
The test sits at the top of the hierarchy. It initiates the construction of the environment components and the connections between them, and is also responsible for testbench configuration and the stimulus generation process.
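A sketch of how the pieces tie together. Component and signal names follow the sections above, and dut_if is an assumed interface; a production environment would also handle end-of-test draining more carefully:

```systemverilog
class Environment;
  Generator  gen = new();
  Driver     drv = new();
  Monitor    mon = new();
  Scoreboard scb = new();

  function new(virtual dut_if vif);
    mailbox #(Transaction) gen2drv = new();
    mailbox #(Transaction) mon2scb = new();
    gen.gen2drv = gen2drv;  drv.gen2drv = gen2drv;  // generator -> driver
    mon.mon2scb = mon2scb;  scb.mon2scb = mon2scb;  // monitor -> scoreboard
    drv.vif = vif;          mon.vif = vif;          // pin-level access
  endfunction

  task run();
    fork
      drv.run(); mon.run(); scb.run();  // free-running components
    join_none
    gen.run();                          // returns when stimulus is exhausted
  endtask
endclass

program test(dut_if vif);
  initial begin
    Environment env = new(vif);
    env.gen.num_txns = 100;   // testbench configuration done by the test
    env.run();
    #100;                     // crude drain time before simulation ends
  end
endprogram
```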
There are two types of verification methodology: directed testing and constrained-random testing.
Directed Testing
A directed test is written to verify a specific function or feature. In the traditional approach, when a design verification request is received, the verification engineer begins by listing all the tests based on the design specification and then writes an individual test for each case. Each test provides specific test values to the DUT.
The name “directed test” comes from the fact that each test focuses on verifying one or a set of specific features of the design. Therefore, in order for a design to be tested 100% or to achieve 100% coverage, the verification engineer must carefully consider and write all the required test cases. This is entirely achievable but requires a great deal of time and effort. However, from a management perspective, this method demonstrates steady progress and consistent results throughout the verification process.
Coverage is a metric that represents the percentage of the design that has been verified. When a design has been fully tested, the coverage value is 100%.
However, if the design increases in complexity—for example, doubling in size—then the testing time will also double or even increase further. At the same time, there may be periods during which the coverage level remains unchanged while using directed tests.
A faster method is therefore needed, but it must still achieve the goal of 100% coverage. That method is random testing.
Constrained-random testing
While directed tests help us find bugs that we predict might occur in the design, random tests can uncover bugs that we did not anticipate.
As the name suggests, a random test is a testbench that automatically generates random values (a feature supported by SystemVerilog) to provide to the DUT. However, for standardized protocols such as APB, AHB, AXI, Avalon, etc., where signals must follow specified handshake rules, how can random testing be applied? In such cases, if pure random values were generated for control and data signals, proper verification would not be possible. Therefore, the random method is enhanced with constraints, which limit the randomness according to specific conditions. This feature is also supported by SystemVerilog. Such tests are called constrained-random tests.
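For example, an APB-like write transaction could be constrained so that randomization only produces legal stimulus. The signal names and ranges below are assumptions for illustration, not taken from the APB specification:

```systemverilog
class ApbTransaction;
  rand bit [31:0] paddr;
  rand bit [31:0] pwdata;
  rand bit        pwrite;

  // Constraints limit randomness to values the protocol and slave allow:
  constraint legal_addr_c { paddr inside {[32'h0 : 32'hFFF]}; } // assumed slave range
  constraint align_c      { paddr[1:0] == 2'b00; }              // word-aligned only
  constraint wr_bias_c    { pwrite dist { 1 := 7, 0 := 3 }; }   // favor writes 70/30
endclass
```

Calling randomize() on such an object yields values that always satisfy every active constraint; an individual constraint can be disabled with constraint_mode(0) when an error-injection test deliberately needs illegal values.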
As can be seen, with random tests, both the sample values driven into the DUT and the order in which they are applied can be randomized automatically, with constraints added if necessary. As a result, verification progress can be very fast at the early stage of test execution. However, developing this type of test often takes longer than directed tests, since additional monitoring models and checkers are required to validate the random values.
Towards the end of the simulation phase with random testing, the scope of verification for the design becomes narrower. In some cases, many additional constraints may need to be applied to the random tests, or directed tests may be used instead to verify these cases. Such cases are referred to as corner cases.
In summary, while directed tests generate the exact values required for verification, random tests can produce a wide variety of values. As a result, random tests may overlap, with two or more tests generating the same scenario. This is not considered a serious issue. However, if random tests generate invalid values, additional constraints must be introduced to restrict the randomization.
The procedure for checking coverage in directed tests and random tests differs as follows:
Directed test: Run the test → check coverage → identify uncovered points (not yet verified) → modify the test to create a new one → repeat from the beginning.
Random test: Run the test multiple times with different random value sequences → check coverage → identify uncovered points (not yet verified) → modify the test to create a new one (if needed) → add constraints → repeat from the beginning.
A verification plan is a comprehensive document that outlines the entire verification process for a particular design or system. It specifies the verification objectives, the verification environment, the verification strategy, the methodology to be used, the metrics to be collected, and the criteria for completion.
The verification plan also defines the verification tasks to be performed and their priorities, the tools to be used, the schedules and milestones, and the resources required. A verification plan serves as a guide for the verification team and helps ensure that the verification process is complete, consistent, and effective.
Content of Verification Plan
A verification plan is typically documented in a spreadsheet or a document that outlines the verification goals, objectives, methodologies, and strategies for verifying a digital design. The document typically includes the following sections:
Overview: A brief overview of the digital design being verified and the verification objectives.
Scope and Goals: A description of the scope of the verification, the goals to be achieved, and the verification objectives.
Methodology: A description of the verification methodology to be used, including the tools and techniques to be used, such as simulation, formal verification, emulation, and hardware acceleration.
Testbench Architecture: A description of the testbench architecture, including the interfaces and test sequences to be used.
Test Cases: A detailed list of test cases that will be executed, including the test case description, the expected results, and the pass/fail criteria.
Coverage Metrics: A description of the coverage metrics to be used, including functional coverage, code coverage, and assertion coverage.
Sign-off Criteria: A description of the criteria for sign-off, including the minimum coverage requirements and the criteria for verifying the design.
Such a plan is created early in the design process, partly to identify the effort and resources required to execute it. These efforts are estimated in weeks or months and must fit the project schedule. No plan is perfect, so many project schedules allow for a certain amount of timeline slip.
Plan Review
Verification plan reviews are usually held among peers and design team members to close gaps in the understanding of project requirements and implementation details. There may be multiple revisions of the verification plan until a consensus is reached among all involved members. Allocating time for planning is crucial in order to think through project requirements, get clarifications from different teams, understand risks, and prioritize tasks to avoid a respin of the chip.
https://nguyenquanicd.blogspot.com/2017/08/verification-tong-quan-ve-cong-viec.html
Chris Spear, "SystemVerilog for Verification"
Janick Bergeron, "Writing Testbenches: Functional Verification of HDL Models"