AutoTest feedback is meant to help you progress in your implementation by:
Letting you know if you are on the right track.
Providing hints on where you could work next to improve your implementation and test code.
Giving you a stopping point so you can prioritize other work.
It is also meant to guide you to develop software according to the learning goals of this course, including:
Interpreting a specification,
Test-driven development, and
Using feature/dev branches (when applicable).
Examples of how to use each type of feedback are detailed below. To ensure you are confident about your submission, you should:
Write your own tests and run them locally to ensure your implementation is accurate.
Run @310-bot #check to evaluate the accuracy of your test suite. #check will let you know if any of your tests are written incorrectly, as it runs your tests against a reference implementation.
Review the Smoke Test feedback from AutoTest on the main branch (applicable for c1, c2, and c3).
Your project is automatically graded every time commits reach your project's main branch on GitHub, whether by a direct git push or by merging a branch. Your grade will be the maximum grade you received across all submissions made before the hard deadline. From c1 on, the main branch is protected, so direct pushes are not possible.
The only timestamp AutoTest trusts is the timestamp associated with a push or merge event (e.g., when a commit is pushed/branch is merged to the git server). This is the timestamp that will be used for determining whether a commit was made before a deadline (this is because commit timestamps can be modified on the client side, but push timestamps are recorded on the AutoTest server itself). Make sure you push your work to GitHub before any deadline and know that any pushes after the deadline (even if some commits within that push appear to come from before the deadline) will not be considered.
You can request to view feedback on your latest submission by creating a comment with @310-bot #<deliverable_id> on a commit (see figure below for an example for the c0 deliverable). You may request feedback twice per calendar day--within minutes of each other, or at 10am and 10pm, or at any other two times per day. The choice is yours! Beware: There is no way to cancel a request, so make the requests carefully. A project that fails to build on AutoTest will NOT consume one of these requests.
AutoTest feedback is meant to help you gauge your progress and to give gentle hints if you are stuck. It is not meant as a replacement for good software engineering practices like specification analysis or test suite strengthening, nor is it meant to replace Piazza or office hours!
Note that feedback may take more than 12 hours to be returned, especially close to the deadline when AutoTest is under heavy load. If you wish to make the most of this feedback, we recommend starting early (when it will likely be returned within seconds!). AutoTest runs pre-emptively in the background, so when load is low it may be able to return your result immediately. If you request a result that has already been computed, the cached version will be returned and AutoTest will not run again (you will only be 'charged' once for a given <deliverable, SHA> tuple).
Your code can only be graded (and feedback can only be given) if your code builds (which you can verify with yarn build), passes prettier (which you can verify with yarn prettier:check), and passes lint (which you can verify with yarn lint:check). Obviously non-building code cannot run, but trying to disable prettier or lint (either through configuration files or in-code directives) can cause your code not to run. Please don't try to do this! If this happens to you though, AutoTest will be clear that the code did not run, so fixing the problem should be relatively straightforward.
In GitHub, navigate to view a commit and then add a comment at the bottom of the page with the deliverable id. In this example, the deliverable id is c0.
AutoTest will respond with feedback like this. The Additional Feedback will only be shown if all the tests in your repo are in a valid state (e.g., the tests you are trying to run complete successfully). That said, even if the additional feedback is not shown to you, your grade is still being computed in the background.
Description: On the main branch, you will receive information on how your implementation performs against our Smoke Test Suite. The Smoke Test Suite is composed of basic tests, each chosen as a minimal representative test case from a cluster of related acceptance tests. If you are failing a smoke test, it is likely that you are failing several more acceptance tests in that cluster, which indicates that you should focus on strengthening your test suite in that area. Smoke tests are a subset of the full, private acceptance Client Test Suite.
Furthermore, you may receive Additional Feedback if your own test suite covers more than 85% of your implementation code (calculated using statement coverage). This is meant to guide you on what to work on next while also encouraging you to improve and maintain your local test suite.
Command: comment @310-bot #<deliverable_id> on a commit in the main branch (comments on PRs will NOT work).
Frequency: You will only be able to request #<deliverable_id> feedback twice per calendar day (per person). You can consume your two grade requests within minutes of each other, or at 10am and 10pm, or at any other two times per day. The choice is yours! Beware: There is no way to cancel a request, so make the requests carefully.
Tips: If a smoke test is failing, it means that your suite is NOT strong enough to catch it and should be strengthened by closely examining the specification for areas you are not yet covering sufficiently. The #check command described below is a great way to help you strengthen your test suite!
If your code does not pass preliminary checks (including formatting, linting, compiling) AutoTest will not run the rest of the commit. Note though: this should never happen because you can verify that your code builds, passes prettier, and passes lint locally before committing and pushing to AutoTest (e.g., by running yarn build, yarn prettier:check, and yarn lint:check).
Example of c1 main branch smoke test feedback where the solution passed the 85% coverage threshold, so the additional feedback is shown.
Description: AutoTest provides more limited feedback when it is invoked on development branches, but this feedback will still be sufficient for you to ensure your code will build correctly when graded.
Command: comment @310-bot #<deliverable_id> on a commit on a non-main branch.
Frequency: Counted within the #<deliverable_id> request budget described above for the main branch.
You can invoke the #check command by commenting @310-bot #check on a commit. This will provide a subset of the #c0 feedback, and is meant to help you develop and strengthen your test suite. Specifically, it will give you feedback on the following aspects of your test suite:
Description: Provides additional information about your integration test suite (NOT your implementation). #check will run your InsightFacade.spec.ts file against our implementation, and report:
Missing Files. Missing files will cause your test suite to fail when we run it.
Test Feedback. How your tests perform against the reference implementation. This feedback is useful for determining whether your tests are accurate (i.e., whether you are interpreting the specification correctly).
Performance Hints. If your test suite takes too long, we will notify you of slow addDataset calls, large queries, and unhandled promises.
Command: comment @310-bot #check on a commit on any branch (including main).
Frequency: 4 times per day.
An example of AutoTest feedback when using the #check command on your tests!
Here are a few types of feedback that you will observe and an explanation of what each one means:
Skipped tests are usually caused by an error being thrown while running your tests against our implementation (i.e., a bug in your test code). Common causes include ill-formatted JSON for the query tests, zip files required by tests that are not committed to git, incorrect file paths in tests, and so on.
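For instance, dataset archives used by tests are typically read from disk and converted to base64 before being passed to addDataset. Below is a minimal sketch of such a helper, assuming a hypothetical test/resources/archives directory; the exact location and file names are placeholders, but whatever you use must be committed to git and referenced with the exact (case-sensitive) path:
import * as fs from "fs";
import * as path from "path";

// Resolve the archive relative to this test file so the path works on any machine.
// "resources/archives" is an assumed layout; substitute your own fixture directory.
function getContentFromArchive(name: string): string {
    const archivePath = path.join(__dirname, "resources", "archives", name);
    return fs.readFileSync(archivePath).toString("base64");
}

const content = getContentFromArchive("sections.zip");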
There is a bug in your test. Tests can have bugs, just like implementation code! A test has three phases: Setup, Execution, and Validation. Your job is to figure out which of these is causing the issue.
The Setup contains any code required to get your application into the desired testable state.
The Execution contains the call to the method under test.
The Validation contains any asserts.
For example:
it("should list one dataset", async function() {
// Setup
await insightFacade.addDataset("ubc", content, InsightDatasetKind.Sections);
// Execution
const datasets = await insightFacade.listDatasets();
// Validation
expect(datasets).to.deep.equal([{
id: "ubc",
kind: InsightDatasetKind.Sections
numRows: 64612
]);
})
Setup
Check each parameter: is it correct? Am I passing in a valid value for each one?
Am I doing all the necessary setup for this test? If I'm testing performQuery, have I added the dataset I intend to query? (See the sketch after this checklist.)
Am I handling promises correctly?
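For example, a performQuery test can only pass if the dataset being queried was added first. A minimal sketch of that Setup, assuming simpleQuery and expectedResult are fixtures you have defined elsewhere:
it("should resolve a simple query", async function () {
    // Setup: the dataset must be added before it can be queried
    await insightFacade.addDataset("sections", content, InsightDatasetKind.Sections);
    // Execution
    const result = await insightFacade.performQuery(simpleQuery);
    // Validation
    expect(result).to.deep.equal(expectedResult);
});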
Execution
Check each parameter: is it correct? Am I passing in a valid value for each one?
Am I handling promises correctly?
Validation
This is the only phase you can check locally! First, update your InsightFacade implementation to return the expected result, whether that is a promise resolving with a value or a promise rejecting with an error.
For example, below I've updated listDatasets to return the expected result for the above test:
listDatasets(): Promise<InsightDataset[]> {
    return Promise.resolve([{id: "ubc", kind: InsightDatasetKind.Sections, numRows: 64612}]);
}
After updating your implementation, your test should pass locally.
AutoTest will time out if your tests take too long to run. A common issue is repeating expensive work that only needs to be done once. For example, adding a dataset is expensive! For performQuery tests, we recommend adding your datasets once in a before hook rather than repeatedly in beforeEach (see the sketch below).
In addition, avoid adding the given dataset (pair.zip) when you don't have to. You can create a smaller dataset with only a couple valid sections. This will suffice for the vast majority of your tests.
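A sketch of this pattern is below; the dataset id, fixture content, and query objects are illustrative placeholders:
describe("performQuery", function () {
    let insightFacade: InsightFacade;

    before(async function () {
        // Runs once for the whole suite: do the expensive work here
        insightFacade = new InsightFacade();
        await insightFacade.addDataset("sections", smallContent, InsightDatasetKind.Sections);
    });

    it("should resolve a simple query", async function () {
        const result = await insightFacade.performQuery(simpleQuery);
        expect(result).to.deep.equal(expectedResult);
    });
});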
You have an unhandled promise in your tests! You will need to go through each test and ensure you are handling promises correctly. If you are unsure what it means to "handle" a promise, check out the async cookbook module.
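As a sketch of the difference: the first test below neither awaits nor returns the promise it creates, so Mocha finishes the test before the assertion runs and any rejection surfaces later as an unhandled promise; the second awaits every promise, so failures are reported against the right test:
// Problematic: the promise is neither awaited nor returned
it("should add a dataset (unhandled)", function () {
    insightFacade.addDataset("ubc", content, InsightDatasetKind.Sections)
        .then((ids) => expect(ids).to.deep.equal(["ubc"]));
});

// Better: every promise is awaited, so the test only passes when the work succeeds
it("should add a dataset (handled)", async function () {
    const ids = await insightFacade.addDataset("ubc", content, InsightDatasetKind.Sections);
    expect(ids).to.deep.equal(["ubc"]);
});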
Missing files are a common source of difficulty, most often caused by files missing from version control, referencing files in a case-insensitive manner, or accessing directories that exist in development but not in production (or on a partner's machine). Emitting any form of `ENOENT` error message to the console can trigger these warnings, so log your file accesses carefully and avoid re-throwing Node's file-related errors or printing them straight to the console.
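For instance, if your implementation persists datasets to disk, here is a sketch of catching the Node error and translating it rather than printing it; the ./data directory and the loadDataset helper are assumptions about your design, not part of the spec:
import * as fs from "fs/promises";

// Hypothetical helper: read a persisted dataset, treating a missing file as "not added yet"
async function loadDataset(id: string): Promise<string | undefined> {
    try {
        return await fs.readFile(`./data/${id}.json`, "utf-8");
    } catch (err) {
        // Don't console.log(err) or re-throw the raw ENOENT error here;
        // translate it into behaviour that makes sense for your application.
        return undefined;
    }
}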
This message occurs when a test passes on one run, but fails on another. Most likely, the flakiness is caused by the issue described above (Unhandled Promises). As a first step, we recommend reviewing your test suite for any unhandled promises.
You will see this error if you have a dynamic test name in your test suite. The names of the failing tests will only be shown if the exact string of the test name is present in your source files.
This means a test written like
it("Should" + " add a valid dataset", function () {...});
will not have its name reported back upon failure, as the test name is created dynamically. (This restriction is in place to prevent grading data exfiltration.)
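The fix is to use a single literal string for the name, for example:
it("Should add a valid dataset", function () {...});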