[Tues Feb 4th] Added section on how AutoTest works and unit tests.
[Mon Jan 20th] Added details about grading for C1 onwards. TL;DR you can only receive a grade on work that is merged into the main branch before the deadline through a PR approved by your teammate.
[Mon Jan 13th] Updated information for C0 on the frequency of calls per day (from 2 to 3), and how build failures will result in a consumed call. Build failures are not free.
[Thurs Jan 9th] Added information about #check feedback, then removed #check as it only works for C1-C3 checkpoints.
AutoTest feedback is meant to help you gauge your progress and to give gentle hints if you are stuck. It is not meant as a replacement for good software engineering practices like specification analysis or test suite strengthening, nor is it meant to replace Piazza or office hours!
AutoTest feedback is meant to help you progress in your implementation by:
Letting you know if you are on the right track.
Providing hints on where you could work next to improve your implementation and test code.
Giving you a stopping point so you can prioritize other work.
It is also meant to guide you to develop software according to the learning goals of this course, including:
Interpreting a specification,
Test-driven development, and
Using feature/dev branches (when applicable).
Your project is automatically graded every time you push or merge to your project's main branch on GitHub.
Your grade will be the maximum grade you received from all submissions made before the deadline.
From Checkpoint 1 onwards, the main branch is protected so you must create and merge a pull request (PR) against main in order for your work to be assessed by the full acceptance test suite (i.e., the bot's private tests). Each pull request must (and can only) be approved by your partner. The git cookbook provides guidance on creating and managing branches. Importantly: having more than 3 branches is considered an anti-pattern, and stale branches should be deleted. When merging a branch into main, please use only the default merge option. If you merge using squash or rebase, the bot will not see a new commit on main and will only provide feature/dev branch feedback.
Note: the only timestamp AutoTest trusts is the timestamp associated with a push or merge event (e.g., when a commit is pushed/branch is merged to the git server). This is the timestamp that will be used for determining whether a commit was made before a deadline (this is because commit timestamps can be modified on the client side, but push timestamps are recorded on the AutoTest server itself). Make sure you push your work to GitHub before any deadline and know that any pushes after the deadline (even if some commits within that push appear to come from before the deadline) will not be considered.
You can request feedback by creating a commit comment which mentions the bot and checkpoint: @310-bot #<checkpoint>. For example, the screenshot below shows a student requesting feedback on checkpoint c0. You can request feedback 3 times per day. The limit is reset at midnight.
Notes:
AutoBot only responds to comments made on a commit in GitHub; comments in commit messages or on PRs will not work.
There is no way to cancel a request once it has been made.
Calling the bot on a commit that already has the requested checkpoint feedback will not consume a request.
A project that fails build, lint, or prettier on AutoTest WILL consume a request: be sure to always run yarn build before committing!
If AutoTest times out, you will receive a timeout error. A timeout will not consume a request.
AutoTest may take more than 12 hours to respond with feedback when it is under heavy load (typically close to the deadline).
In GitHub, navigate to view a commit and then add a comment at the bottom of the page with the checkpoint. In this example, the checkpoint is c0.
The bot will provide feedback as a follow-up commit comment. The feedback will depend on the checkpoint and whether the feedback was requested for a commit on the main branch.
Free Mutant Cluster Status: The number of free mutants you have caught.
Smoke Test Clusters: Areas of the specification where you still have uncaught mutants. Use this information to strengthen your test suite.
Additional Feedback: If all of your tests run successfully on AutoTest, then your current bucket grade is provided.
Commits on the main branch
Smoke Test Clusters: Areas of the specification where your implementation is insufficient. The clusters are computed by running a subset of the private tests. You should strengthen your implementation (and the tests that exercise it) in the areas indicated.
Additional Feedback: If the statement coverage of your tests on your implementation is >85% then AutoTest will provide your current bucket grade and suggested next steps.
Commits on other branches
AutoTest runs your InsightFacade.spec.ts file against our implementation, primarily to help you assess the accuracy of your test suite. It can also help you identify any issues that would prevent the bot from providing complete feedback on main, including:
Missing Files. Missing files will cause your test suite to fail when we run it.
Performance Hints. If your test suite takes too long, we will notify you of slow addDataset calls, large queries, and unhandled promises.
Note: Any problem identified on your dev branch will also inhibit AutoTest from successfully evaluating your solution when you merge it to main.
Here are a few types of feedback that you will observe and an explanation of what each one means:
Skipped tests are usually caused by an error being thrown while running your tests against our implementation (i.e., a bug in your test code). Common causes include ill-formatted JSON files for the query tests, zip files required by tests that aren't committed to git, incorrect file paths in tests, etc.
There is a bug in your test. Tests can have bugs, just like implementation code! A test has a Setup, an Execution, and a Validation; your job is to figure out which of these is causing the issue.
The Setup contains any code required to get your application into the desired testable state.
The Execution contains the call to your method under test.
The Validation contains any asserts.
For example:
it("should list one dataset", async function() {
// Setup
await insightFacade.addDataset("ubc", content, InsightDatasetKind.Sections);
// Execution
const datasets = await insightFacade.listDatasets();
// Validation
expect(datasets).to.deep.equal([{
id: "ubc",
kind: InsightDatasetKind.Sections
numRows: 64612
]);
})
Setup
Check each parameter: is it correct? Am I passing in a valid value for each parameter?
Am I doing all the necessary setup for this test? If I'm testing performQuery, have I added the dataset to query?
Am I handling promises correctly?
Execution
Check each parameter: is it correct? Am I passing in a valid value for each parameter?
Am I handling promises correctly?
Validation
This is the only thing you can test locally! First, update your InsightFacade implementation to return the expected result whether it be a promise resolving with a value or a promise rejecting with an error.
For example, below I've updated listDatasets to return the expected result for the above test:
listDatasets(): Promise<InsightDataset[]> {
    // Temporarily hard-code the result that the test above expects
    return Promise.resolve([{id: "ubc", kind: InsightDatasetKind.Sections, numRows: 64612}]);
}
After updating your implementation, your test should pass locally.
AutoTest will timeout if your tests are taking too long to run. A common issue is repeating expensive work that only needs to be done once. For example, adding a dataset is expensive! For performQuery tests, we recommend adding your datasets once in a before rather than repeatedly in beforeEach.
In addition, avoid adding the given dataset (pair.zip) when you don't have to. You can create a smaller dataset with only a couple valid sections. This will suffice for the vast majority of your tests.
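For instance, here is a minimal sketch of a performQuery test file that does the expensive setup once in a before hook and uses a small fixture zip. The import paths, the small.zip fixture, and the "sections" id are assumptions; adjust them to your own project layout and replace the placeholder query with a valid one.

import { expect } from "chai";
import * as fs from "fs-extra";
import InsightFacade from "../src/controller/InsightFacade";
import { InsightDatasetKind } from "../src/controller/IInsightFacade";

describe("performQuery", function () {
    let facade: InsightFacade;

    before(async function () {
        // Runs once for the whole describe block: add the (small) dataset a single time.
        facade = new InsightFacade();
        const content = (await fs.readFile("test/resources/archives/small.zip")).toString("base64");
        await facade.addDataset("sections", content, InsightDatasetKind.Sections);
    });

    it("should resolve a simple query", async function () {
        const query = { /* replace with a valid query against the "sections" dataset */ };
        const result = await facade.performQuery(query);
        expect(result).to.be.an.instanceOf(Array);
    });
});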
You have an unhandled promise in your tests! You will need to go through each test and ensure you are handling promises correctly. If you are unsure what it means to "handle" a promise, check out the async cookbook module.
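As a rough illustration (assuming facade and content come from a before hook like the sketch above), the first test below starts a promise but never awaits or returns it, so Mocha moves on before the work finishes; the second handles it with async/await.

// Unhandled: addDataset returns a promise that is neither awaited nor returned,
// so this test "finishes" immediately and any later failure becomes an unhandled rejection.
it("adds a dataset (unhandled promise, do not do this)", function () {
    facade.addDataset("sections", content, InsightDatasetKind.Sections);
});

// Handled: the test is async and awaits every promise it starts,
// so failures are reported against this test.
it("adds a dataset (handled promise)", async function () {
    const ids = await facade.addDataset("sections", content, InsightDatasetKind.Sections);
    expect(ids).to.be.an.instanceOf(Array);
});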
Missing files are a common source of difficulty. They are most often caused by files missing from version control, file paths whose casing does not match the actual file name (which can work on a case-insensitive file system but fail on the grading server), or directories that exist in dev but not in prod (or on a partner's machine). Emitting any form of `ENOENT` error message to the console can trigger these warnings, so handle Node's file-related errors deliberately instead of re-throwing them or printing them straight to the console.
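For example, here is one hedged way to read an optional cache file with fs-extra without letting a raw ENOENT reach the console; the function name and the error handling shown here are made up for illustration.

import * as fs from "fs-extra";

// Hypothetical helper: read a cached file if it exists, without echoing
// Node's ENOENT error to the console when it does not.
async function readCacheIfPresent(path: string): Promise<string | undefined> {
    if (!(await fs.pathExists(path))) {
        return undefined; // expected case: nothing cached yet, nothing to log
    }
    try {
        return await fs.readFile(path, "utf-8");
    } catch (err) {
        // Translate the failure instead of printing Node's raw error object.
        throw new Error(`Could not read cache file at ${path}`);
    }
}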
This message occurs when a test passes on one run but fails on another. Most likely, the flakiness is caused by the issue above: unhandled promises. As a first step, we recommend reviewing your test suite for any unhandled promises.
You will see this error if you have a dynamic test name in your test suite. The names of the failing tests will only be shown if the exact string of the test name is present in your source files.
This means a test written like
it("Should" + " add a valid dataset", function () {...});
will not have its name reported back upon failure, as the test name is created dynamically. (This restriction is in place to prevent grading data exfiltration.)
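The same test with a static name string is reported normally if it fails:

it("Should add a valid dataset", function () {...});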
Whether AutoTest is grading your implementation or giving you feedback on the quality of your test suite, it will follow the same steps at the beginning of each build. The following is a list of steps that AutoTest takes whenever it is called:
Step 1: Statically analyze files to catch potential build errors
Check all required files are in your solution (e.g. package.json, yarn.lock, InsightFacade files).
Check for banned terms (e.g. eval() and synchronous fs-extra methods); see the sketch after this list.
Check that no libraries (dependencies) have been added to your package.json.
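As a rough illustration of the banned-terms check (the helper below is hypothetical), synchronous fs-extra calls should not appear anywhere in your code; use the promise-based equivalents instead:

import * as fs from "fs-extra";

async function loadArchive(path: string): Promise<Buffer> {
    // const buffer = fs.readFileSync(path); // would be flagged: synchronous fs-extra method
    return await fs.readFile(path);          // fine: asynchronous, promise-based equivalent
}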
Step 2: Replace files for grading
We replace the following files in your implementation with those from AutoTest:
All dependencies: package.json, yarn.lock and node_modules directory.
In the previous step, we check to ensure you haven't added any dependencies to your implementation. If you have, then once we replace your package.json and node_modules directory, those dependencies will no longer be available to your implementation. In addition, any version changes you made to existing dependencies are reverted.
Typescript: tsconfig.json
Any changes you've made to your tsconfig.json are ignored by the grader. This is why we recommend not changing your tsconfig.json file.
Prettier: .prettierrc.js
Step 3: Build project
Run yarn prettier
(Optional) Run yarn lint
See the page on Formatting & Lint for more details on how to modify your eslint settings.
Run yarn build
Step 4: Collect coverage information
Collect coverage information: run yarn cover
Run your tests against your implementation: run yarn test
If your tests are very slow, executing your tests against your implementation can cause the bot to timeout. To view performance hints for your test suite, run the checkpoint deliverable on a development branch (e.g. @310-bot #c0 on branch performQuery).
After executing the above steps, AutoTest is ready to grade your project. To do this, it will run AutoTest's test suite against your implementation. It performs the following steps:
Step 5: Grading
Replace test directory in your project with the test directory from AutoTest.
In the next step, we will build the project. If your src code depends on your test code in any way, the build will fail (see the sketch after these steps).
Build the project: run yarn build
Execute our test suite: run yarn test
Calculate your grade from the number of passing tests, and create feedback to return to the student.
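To make the "src depends on test" failure mode from the steps above concrete: an import like the (made-up) one below compiles locally, but breaks during grading as soon as the test directory is swapped out.

// src/controller/InsightFacade.ts
// BAD: ../../test/TestUtil is a hypothetical file in your test directory; once AutoTest
// replaces that directory, this import no longer resolves and yarn build fails.
import { loadFixture } from "../../test/TestUtil";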
After executing the above steps, AutoTest is ready to assess the quality of your test suite. To do this, it will run your test suite against AutoTest's implementation. It performs the following steps:
Step 5: Test suite execution
Replace the src directory in your project with the src directory from AutoTest
In the next step, we will build the project. If your test code depends on your src code in any way, then the build will fail.
Remove any unit tests from your test directory (e.g. remove all test files except InsightFacade.spec.ts)
Since we are running your tests against AutoTest's implementation, there can be no references to your internal implementation. Your unit tests will reference your internal implementation files (like helper classes for addDataset), which our implementation does not have. We remove your unit tests to avoid build errors in the following step.
Build the project: run yarn build
Execute your test suite: run yarn test
Review the test run for any potential performance issues, and create feedback to return to the student.
If you are still curious about Classy and AutoTest, feel free to check out the open-source repo for Classy that 310 depends on: Classy on GitHub!
To avoid any build issues with AutoTest, we recommend adding your unit tests to a separate directory within your test directory. For example, please add all your unit tests to /test/unit/.
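For example, a (hypothetical) white-box unit test like the one below belongs in /test/unit/: it imports one of your own helper classes, so it can only build against your implementation, and AutoTest will drop it before running your suite against the reference implementation.

// test/unit/DatasetParser.spec.ts -- DatasetParser and countSections are hypothetical names
import { expect } from "chai";
import DatasetParser from "../../src/controller/DatasetParser";

describe("DatasetParser (unit)", function () {
    it("reports zero sections for empty input", function () {
        const parser = new DatasetParser();
        expect(parser.countSections("")).to.equal(0);
    });
});

Your black-box InsightFacade.spec.ts, which only exercises the public interface, is the file AutoTest keeps and runs against its own implementation.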