FAQ

This page collects commonly asked questions (paraphrased) that the contest organizers have received. We will update the contest documents if necessary. The authors of all questions will remain anonymous.

Question: In contest_education.pdf, what is the difference between Equations 27 and 28? Don't they yield the same amount of credit?

Response:

The difference is that for setup tests, the credit is defined as the difference between the late and early delays across the common path, whereas for hold tests, it is sufficient to take the difference between the late and early arrival times at the common point.

In general, the arrival time AT at the common point cp equals the arrival time at the input clock CLK plus the sum of the delays up to cp.

For example, in late mode, ATL(cp) = ATL(CLK) + SUM(late delays). Similarly, in early mode, ATE(cp) = ATE(CLK) + SUM(early delays).

When we apply Equation 27 -- ATL(cp) - ATE(cp) -- to calculate the hold test credit, we have ATL(CLK) - ATE(CLK) + [SUM(late delays) - SUM(early delays)].

Notice that [SUM(late delays) - SUM(early delays)] is equivalent to Equation 28, which is the definition of setup credit.

From these equations, if we set the early and late clock arrival times to be the same, then the test credits for both hold and setup will be the same. However, in practice, if they differ, hold tests will typically receive more credit.
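
As a concrete numeric illustration, here is a minimal sketch of the two credits (the variable names and the example delay values are ours, not from the contest documents):

# Equations 27 and 28 on toy numbers; values are illustrative only.
at_late_clk, at_early_clk = 1.0, 0.5   # late/early arrival times at CLK
late_delays = [0.5, 0.25]              # late delays along the common path to cp
early_delays = [0.25, 0.25]            # early delays along the common path to cp

setup_credit = sum(late_delays) - sum(early_delays)   # Equation 28
# Equation 27: ATL(cp) - ATE(cp)
hold_credit = (at_late_clk + sum(late_delays)) - (at_early_clk + sum(early_delays))

print(setup_credit)   # 0.25
print(hold_credit)    # 0.75 = (1.0 - 0.5) + 0.25: the hold test gets more credit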

Question: What does one clock source mean?

Response:

For any benchmark, there will be only one primary input that generates a clock signal. In a global sense, the design has only one clock domain.

Question: Are FFs always instantiated as "DFFR_xxx" in the input file? If not, is there any specific indication to identify the FF?

Response:

This is not always guaranteed. The best way to identify flip-flops is by the setup and hold tests.
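
For instance, assuming you have already parsed the timing tests into simple records (the Test record and the instance:pin naming below are our assumptions, not a contest-defined format), collecting flip-flop instances might look like:

# Hypothetical sketch: collect flip-flop instances from parsed timing
# tests. The Test record and the "instance:pin" naming convention are
# assumptions about the parsed data, not a contest-defined API.
from collections import namedtuple

Test = namedtuple("Test", ["kind", "pin"])   # kind is "setup" or "hold"

def flip_flop_instances(tests):
    # Any instance that is the endpoint of a setup or hold test.
    return {t.pin.split(":")[0] for t in tests if t.kind in ("setup", "hold")}

tests = [Test("setup", "DFFR_X2_5:D"), Test("hold", "DFFR_X2_5:D")]
print(flip_flop_instances(tests))            # {'DFFR_X2_5'}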

Question: What is the difference between -numPaths and -numTests?

Response:

-numTests refers to the number of tests to be printed, while -numPaths refers to the number of paths to be printed per test. As clarification, -numPaths and -numTests are independent of each other: setting one should not affect the other. Please also note that -numPaths limits the number of paths to print, not the number of paths to consider. That is, if there are 5 data paths feeding one test and the setting is -numPaths 2, then only the two most critical of the 5 paths you have considered should be printed.

By default, these values are set to infinity, i.e., print all available tests and paths. As an example:

-numPaths 2 -numTests 3 -test setup should yield:

<setup test 1>

<path 1 for test 1>

<path 2 for test 1>

<setup test 2>

<path 1 for test 2>

<path 2 for test 2>

<setup test 3>

<path 1 for test 3>

<path 2 for test 3>
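
A minimal sketch of this truncation logic, assuming tests and their path lists are already sorted by post-CPPR slack (the data layout below is ours):

# Hypothetical sketch of the truncation logic. Each entry in `tests`
# is assumed to be (test_line, [path_line, ...]), with the tests and
# each path list already sorted by post-CPPR slack.
def report(tests, num_tests=None, num_paths=None):
    # None reproduces the default: print all available tests and paths.
    for test_line, paths in tests[:num_tests]:
        print(test_line)
        for path_line in paths[:num_paths]:
            print(path_line)

# report(all_setup_tests, num_tests=3, num_paths=2)  # -numPaths 2 -numTests 3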

Question: In the output file, what is the ordering of the tests and paths?

Response:

You should order both tests and paths by their post-CPPR slack, most critical (smallest) first. For example, suppose I have test A and test B, test A has paths A1 and A2, and test B has paths B1 and B2. Let post-CPPR(A) < post-CPPR(B), post-CPPR(A2) < post-CPPR(A1), and post-CPPR(B1) < post-CPPR(B2). Your output should be:

<test A>

<path A2>

<path A1>

<test B>

<path B1>

<path B2>

Keep in mind that you should not print paths or tests that have both positive pre-CPPR and post-CPPR slack. If a test has negative pre-CPPR slack but positive post-CPPR slack, it should still be considered when printing the final output.
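
In code form, this filter-then-sort rule might look like the following sketch (the attribute names pre_cppr and post_cppr are ours):

# Hypothetical sketch of the filter-then-sort rule.
def printable(items):
    # Drop anything whose pre-CPPR *and* post-CPPR slacks are both
    # positive; a negative pre-CPPR slack keeps an item in play even
    # if its post-CPPR slack turns positive.
    kept = [x for x in items if not (x.pre_cppr > 0 and x.post_cppr > 0)]
    return sorted(kept, key=lambda x: x.post_cppr)   # most critical first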

Question: When comparing the given outputs to my own, there are some numeric mismatches starting in the 4th or 5th decimal places, e.g., 1.0810e-12 vs 1.0809e-12. Is this acceptable?

Response:

This minor difference is most likely due to the use of different internal data structures, e.g., floats vs. doubles, between your implementation and our output generator. So, these differences are acceptable.
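
If you want to run such comparisons yourself, a tolerance check along these lines is usually sufficient (a sketch; the tolerance values are ours, not the contest's official fuzz factors):

# Hypothetical sketch: treat two slack values as matching when they
# agree within a small absolute or relative tolerance.
def slacks_match(a, b, abs_tol=1e-15, rel_tol=1e-4):
    return abs(a - b) <= max(abs_tol, rel_tol * max(abs(a), abs(b)))

print(slacks_match(1.0810e-12, 1.0809e-12))   # True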

Question: In the golden output of s526v2.setup (dated Jan. 4th), there is a test starting at DFFR_X2_5:D with:

setup -2.81095e-11 1.60068e-11 5

-2.81095e-11 2.22619e-11 58

Why is there a mismatch between the test slack and path slack?

Response:

This is an interesting corner case where the "most-critical" post-CPPR test slack is (1) positive and (2) originates from an initially-positive pre-CPPR path slack. That is, after CPPR analysis, the initially-positive path received less credit than all the initially-negative (critical) paths and, as a result, became the path with the smallest slack. So, in the strictest sense of accuracy, the post-CPPR test slack is 1.60068e-11. By the contest output guidelines (no path or test with positive pre-CPPR slack is to be printed), this path does not appear in the output list. Therefore, there is a difference between the path and test slack.

In practice, if a test or path has positive post-CPPR slack (or above a specified threshold), this is no longer important to designers, as this portion of the design no longer needs modification (i.e., no timing violation). As a result, the accuracy beyond this threshold is typically not as important, whether the positive slack reported is 3e-10 or 3e10 (though you may want to check your code if the slack is suddenly gigantic). As part of the contest evaluation, we will honor this mentality in that the exact accuracy of a positive post-CPPR slack will be heavily relaxed.

Note, however, that this only applies to tests and paths with negative pre-CPPR slack and positive post-CPPR slack. Tests and paths with positive pre-CPPR slack should not be printed, and tests and paths with both negative pre- and post-CPPR slacks are very much relevant.

Question: What happens if the golden output has a path or test that my output does not? What if the pre-CPPR or post-CPPR slack value does not match?

Response:

There are four relevant cases for the path or test:

1) pre-CPPR slack is positive and post-CPPR slack is positive

We do not consider this case; it is ignored.

2) pre-CPPR slack is positive and post-CPPR slack is negative.

Your post-CPPR slack should never be smaller than your pre-CPPR slack, as the credit applied to a path or test is nonnegative; a positive pre-CPPR slack therefore cannot become a negative post-CPPR slack. This is an undefined case.

3) pre-CPPR slack is negative and post-CPPR slack is positive.

If the pre-CPPR slacks match, but the post-CPPR slack is different, then you should not worry about accuracy. If you have a test or path in this case that the golden output does not have, you will not be penalized. The reasoning is similar to the above response, where if the path or test slack is positive, then it is less relevant from a user perspective. As part of the contest evaluation, we will honor this mentality in that the presence and accuracy of a test or path with positive post-CPPR slack will be heavily relaxed.

4) pre-CPPR slack is negative and post-CPPR slack is negative.

This is the most important case, and both values (pre-CPPR and post-CPPR slacks) must match (excluding numerical precision differences). If you do not have the path or test, then you will not receive accuracy credit.
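
The four cases can be summarized in a small sketch (the return labels are ours):

# Hypothetical sketch summarizing the four evaluation cases above.
def evaluation_case(pre_cppr, post_cppr):
    if pre_cppr > 0 and post_cppr > 0:
        return "ignored"       # case 1: never printed, never scored
    if pre_cppr > 0 and post_cppr < 0:
        return "undefined"     # case 2: credit is nonnegative, so impossible
    if pre_cppr < 0 and post_cppr > 0:
        return "relaxed"       # case 3: presence and accuracy heavily relaxed
    return "must match"        # case 4: both slacks must match the golden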

Question: What happens if my output has a path or test that the golden output does not?

Response:

In general, you will not be explicitly penalized for reporting a false positive, i.e., a test or path that you believe is critical but the golden output does not contain. For evaluation, we will be matching the golden output against yours, so as long as your output encompasses what the golden has, you should be in good shape. However, you may be implicitly penalized, as each such report occupies a slot that a potentially-matching test or path from the golden output could have filled.
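
Conceptually, the matching is one-directional, along the lines of this sketch (our assumptions, not the actual evaluation script):

# Hypothetical sketch of one-directional matching: every golden test
# or path should be found in your report; extra items in your report
# are not explicitly penalized. matches() is an assumed predicate that
# compares endpoints and slacks with the appropriate tolerance.
def coverage(golden_items, your_items, matches):
    found = sum(any(matches(g, y) for y in your_items) for g in golden_items)
    return found / len(golden_items)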

Question: I am seeing slack value mismatching on my paths or tests, and I believe these are due to floating point precision differences. Will I still be penalized for this?

Response:

The short answer is that you will not be penalized for minor floating point precision differences. The longer explanation is that the checker reports these failures because the slack tolerance is currently set to a very sensitive value. The purpose of the checker is primarily to validate that you have all the right paths and tests, along with slack values that are very close to the golden. In this sense, we would rather have you look over (and dismiss) false-positive reports than be unaware of potentially problematic paths or tests. If you like, you may edit the perl script to increase the slackThreshold (line 7) to something like 0.005 or 0.01 and see if the same failures persist. If you believe that your numeric differences cannot be accounted for by precision, please contact us about your specific issue.

While this checker is the backbone of the evaluation script, it does not make the final decision on your score, as the final evaluation script will most likely use a different fuzzy factor.