Checkpoint 0

Learning Outcomes

  • Can initialise and configure a new TypeScript project using yarn and TSConfig.

  • Can manage project dependencies through yarn and package.json.

  • Can navigate and extract specifications from a document.

  • Can translate a specification into a comprehensive suite of black box tests.

  • Can explain how to arrive at a good test suite and the different measures to assess the quality of the suite.

  • Can read and understand an EBNF.

  • Can write asynchronous programs using Promises, and knows their best practices (chaining, rejection handling).

Checkpoint 0: Bootstrapping the Project and TDD

For this checkpoint, you will read the Checkpoint 1 specification, extract detailed requirements, and then turn the requirements you identified into a set of tests.

Test-driven development (TDD) is one modern technique for building software. As the name suggests, engineers write tests for every requirement in the specification before they create the implementation. This makes it much easier to ensure the final product has at least a base level of testing.

In terms of the course project, adopting TDD will ensure you understand all the requirements of the specification before getting buried in the details of your implementation. This is important because implementing code that doesn't meet the requirements will increase the amount of work you need to do for the project.

In this phase of the project, you will be reading a specification and preparing a repository for its implementation. This includes

  • Configuring your environment

  • Initializing your package.json

  • Adding dependencies

  • Translating the specification into a test suite

To evaluate the completeness of your test suite for the spec, we will execute your suite against our own system to measure how well it covers a set of artificially injected mutants in our implementation.

Change Log

Getting the starter code

1) First you will need to log into Classy. Within 24 hours you should have a repository provisioned on GitHub Enterprise.

2) You'll then need to log in to GitHub Enterprise with your CWL account to view and clone your repo (you should have also been emailed a link).

For this course, you will be using Git to manage your code. A description of how to use Git is given in our brief Git Tutorial. Before starting, ensure that you have prepared your computer according to the instructions found in the README of your provisioned repo.

Initializing the repo

For C0 you will begin by setting up your development environment and project bootstrap. This will involve carefully reading the Checkpoint 0 and Checkpoint 1 specifications to find the required installations to start your development. You will be graded by AutoTest, and may invoke it via @310-bot #c0. Failing to build due to an unfinished repo bootstrap will not consume your limited AutoTest submission.

Developing Your Solution

You may wish to follow along with this example project setup video.

Setting up your Packages:

  1. Create your package.json file by running yarn init. Pick a name for your project and then you may use the default options for the remaining fields (by pressing enter).

  2. Add your required packages by using yarn add <package_name>

    • Required Packages: In addition to the required packages for C1 (fs-extra and jszip), C0 requires:

      • mocha and chai for testing your project

      • typescript and @tsconfig/node16 which adds TypeScript and a base configuration we will use at a later step. Bundled with typescript is the command tsc which is used to compile TypeScript code.

      • Type declaration packages for the npm packages that do not have them bundled in the base. These are used to ensure that the TypeScript files properly compile, and include @types/node, @types/fs-extra, @types/mocha and @types/chai.

    • Optional Packages: The following packages are optional

      • ts-node for running TypeScript files directly

      • nyc for coverage reporting

      • chai-as-promised for promise assertion abstraction

      • @ubccpsc310/folder-test for batch testing.

Setting up TSConfig:

Create a new file tsconfig.json in the root of the project. This file will be similar to the example in the TSConfig Bases documentation. However, it will:

  • extend node16 instead of node12

  • have the compiler option noImplicitAny

If you also configure the include and exclude options of the tsconfig, we recommend that you

  • include the test directory in addition to the src directory

  • not exclude .spec.ts files
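Putting those requirements together, a minimal tsconfig.json could look like the following sketch (the include list just reflects the recommendation above and is optional):

```json
{
	"extends": "@tsconfig/node16/tsconfig.json",
	"compilerOptions": {
		"noImplicitAny": true
	},
	"include": ["src/**/*", "test/**/*"]
}
```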

Setting up src:

Create a src directory in the root of the project and then a controller directory within it. Create the following files within controller: InsightFacade.ts and IInsightFacade.ts. The contents of IInsightFacade.ts can be found in the Checkpoint 1 specification, and should not be altered. The contents of InsightFacade.ts must include an InsightFacade class that is the default export of the file and implements IInsightFacade.
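For illustration, a stub along the following lines satisfies the "default export class implementing IInsightFacade" requirement. The declarations at the top are simplified stand-ins for the real IInsightFacade.ts (which you must copy verbatim from the Checkpoint 1 specification and not alter); the method signatures are inferred from the injected Main file in the appendix:

```typescript
// Simplified stand-ins for the real declarations in IInsightFacade.ts;
// use the actual file from the Checkpoint 1 specification instead.
enum InsightDatasetKind { Sections = "sections" }
interface InsightDataset { id: string; kind: InsightDatasetKind; numRows: number; }
type InsightResult = Record<string, string | number>;
class InsightError extends Error {}

interface IInsightFacade {
	addDataset(id: string, content: string, kind: InsightDatasetKind): Promise<string[]>;
	removeDataset(id: string): Promise<string>;
	performQuery(query: unknown): Promise<InsightResult[]>;
	listDatasets(): Promise<InsightDataset[]>;
}

// Default-export stub: every method rejects until C1 is implemented.
export default class InsightFacade implements IInsightFacade {
	public addDataset(id: string, content: string, kind: InsightDatasetKind): Promise<string[]> {
		return Promise.reject(new InsightError("addDataset not implemented"));
	}
	public removeDataset(id: string): Promise<string> {
		return Promise.reject(new InsightError("removeDataset not implemented"));
	}
	public performQuery(query: unknown): Promise<InsightResult[]> {
		return Promise.reject(new InsightError("performQuery not implemented"));
	}
	public listDatasets(): Promise<InsightDataset[]> {
		return Promise.reject(new InsightError("listDatasets not implemented"));
	}
}
```

Returning rejected Promises keeps yarn tsc succeeding while ensuring your C0 tests fail, as expected for an unimplemented InsightFacade.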

Setting up test:

Create a test directory in the root of the project and then a controller directory within it. Create an InsightFacade.spec.ts file within controller.

All datasets added for testing must be placed in the test directory.

At this point, you should have reached full marks for the bootstrapping portion of the project.

Evaluation

To ensure that your repo has been set up correctly, we will be checking that the following conditions are met given your project files:

  1. You have created the manifest file where dependencies will be listed (package.json)

    • You must use yarn to manage dependencies in this project instead of npm

  2. You have installed all the required production dependencies.

  3. You have installed any required development dependencies.

    • This may require type declarations for your production dependencies.

  4. You have created a valid tsconfig.json for the project.

    • Your tsconfig must extend the @tsconfig/node16 base tsconfig

    • Your tsconfig must have the additional compiler option: "noImplicitAny": true

    • These are the only two requirements for your tsconfig file; anything else is optional.

  5. Your project builds after we inject a Main file which imports the API as defined in the Checkpoint 1 specification. This means that when we run the yarn tsc command in the project directory it should succeed (and it should also succeed when you run it).

    • This Main file can be seen in the appendices below.

    • You should create any missing classes with stub implementations of methods to be extended at a future time.

    • You do not need to add this Main file yourself. We will inject the file into your project when it is being assessed by AutoTest.

    • yarn tsc will not work for you until you have added the typescript package.

  6. A test file exists at test/controller/InsightFacade.spec.ts and can be executed.

ESLint (Bonus, ungraded)

After Checkpoint 0, we will be introducing ESLint (see the Specification page for more details). To ensure your test files comply with the linter, you may need to refactor the source files you write during the Checkpoint 0 development period. The best way to get ahead and reduce the refactoring you may need to do later is to write clean and readable code, which is what a linter is meant to help with.

Getting ESLint running will require installing the eslint package, a few packages to make it compatible with TypeScript, and defining an .eslintrc.js.
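One common minimal shape for that file, assuming the usual TypeScript-ESLint packages (@typescript-eslint/parser and @typescript-eslint/eslint-plugin, which are not prescribed by the course), is:

```javascript
// .eslintrc.js — a minimal sketch, not the course-provided configuration.
module.exports = {
	root: true,
	parser: "@typescript-eslint/parser",   // lets ESLint parse TypeScript
	plugins: ["@typescript-eslint"],       // adds TypeScript-specific rules
	extends: [
		"eslint:recommended",
		"plugin:@typescript-eslint/recommended",
	],
};
```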

Testing

You will be writing unit tests for the four methods of the InsightFacade class. These tests will form the basis of your personal test suite for the remainder of the project. The tests you write for C0 must all be contained within a file at test/controller/InsightFacade.spec.ts; this is what AutoTest will use to run your test suite.

We will specifically invoke only the test file test/controller/InsightFacade.spec.ts. Ensure that this file does not rely on your underlying implementation, as your src/ directory will be replaced.

As described earlier, we will execute your suite against our own implementation to measure how well it covers a set of artificially injected mutants. Your goal is to write tests that expose and catch these mutants.

We will give you the first mutant for free! This will allow you to verify that everything is set up correctly to kill the mutants.

The first mutant is in the API method listDatasets() and it causes listDatasets() to return at most one dataset. For example, if two datasets are added correctly and then listDatasets() is called it will only return one of the two datasets (Oh No!).

The testing component of your grade will be computed using the following formula which is explained below:

(number of mutants killed / number of mutants to kill)

NOTE: number of mutants to kill < number of mutants injected. You are not required to kill every mutant for a full score, but the more mutants you can spot with your tests, the more thorough your suite likely is.

Because you have no way of checking how well your tests perform against our mutants on your local computer, you will need to rely on AutoTest to determine your progress. This service is rate limited so you will want to start early. Failing to run any tests at all will not consume your limited AutoTest submission.

Developing your solution

We will be using the Mocha test framework with Chai expectations for testing. You may optionally use the chai-as-promised plugin to simplify assertions on PromiseLike objects. We recommend adding the following script (which requires the ts-node package) to your package.json so you can execute your tests via yarn test:
"test": "mocha --require ts-node/register --timeout 10000 test/**/*.spec.ts"
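Within package.json, this script sits under the scripts field:

```json
{
	"scripts": {
		"test": "mocha --require ts-node/register --timeout 10000 test/**/*.spec.ts"
	}
}
```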

You should add tests to test/controller/InsightFacade.spec.ts. Since InsightFacade is not yet implemented, any tests you add should FAIL.

Specifically, your tests should fail when you run them locally against an invalid implementation of InsightFacade. Note that in this project, tests pass by default if you don't include or check assertions in your tests. Therefore, you must make sure you have explicitly defined asserts for every code path in every test and that the asserts check the correct conditions.

To fully test the addDataset method, you'll need to generate additional zip files with differing content. We recommend you place all of your test zip files in test/resources/archives and create helper methods for loading them in your tests.
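One such helper might read a zip from test/resources/archives and return it as a base64 string (a common shape for addDataset's content argument; the function name and the base64 encoding are assumptions here, so check the IInsightFacade documentation for the exact expected encoding):

```typescript
import * as fs from "fs";
import * as path from "path";

// Hypothetical helper: reads a zip from the recommended
// test/resources/archives directory and returns it base64-encoded.
function getContentFromArchives(name: string): string {
	const archivePath = path.join("test", "resources", "archives", name);
	return fs.readFileSync(archivePath).toString("base64");
}
```

Your tests can then load each archive once and pass the result to addDataset.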

When writing tests for the performQuery method, you will find that the tests have a common structure: define a query to test and the corresponding results, then check that performQuery returns the correct results for the given query. To simplify this process (and to ensure that the InsightFacade.spec.ts file doesn't become cluttered) we have created an optional library called folder-test. folder-test allows you to define test queries and results in separate files, which are used to automatically generate tests that check whether each query returns the correct results. Thus, in addition to writing tests in InsightFacade.spec.ts using it(), you can also write tests for performQuery by creating new JSON files in a directory like test/resources/queries. As you add more valid JSON files to test/resources/queries, you'll see that the number of tests that are run increases, which is a sign that things are working as expected.

You can also feel free to make tests that don't use these provided structures when making your tests, for example if you think of a scenario that you don't feel is easy to test using our library.

There is a reference UI to help with generating query test results. To make C0 easier, the order of the results from performQuery in the reference UI will be identical to the order of the results returned by our implementation. This might not be the case in future implementations.

Getting your grade

This will be an individual checkpoint (the only one in the project). We will check for copied code on all submissions, so please make sure your work is your own. Your project is automatically graded every time you push to GitHub with the command git push. Your grade will be the maximum grade you received from all submissions (commits in Git terminology) made before the hard deadline. While we compute your grade on every submission, you will only be able to request to see your grade once every 12 hours. However, invoking AutoTest feedback will not be counted against you if your project has not completed the repo initialization phase, or if no tests are run. You can request your grade by mentioning @310-bot #c0 in a commit comment. Refer to the AutoTest page for additional details.

Please note the project restrictions specified here.

The #c0 grading rubric is given on the project grading page.

Appendix

Resources

FAQ

Q. Lots of this is new to me (especially TypeScript), how should I get caught up?

A. I would start by taking a look at the resource files first, and the links inside them. Google, as always, is an excellent resource, as most things used in this course are fairly popular and well documented. And don't worry, we expect much (if not most) of this to be new, and that's factored into the challenge of the initial assignments.

Q. I've logged in to GitHub, but don't have a repo?

A. Make sure you are logged into github.students.cs.ubc.ca with your CWL, not github.com or github.ubc.ca. Repo provisioning is done in batches. If you still don't have a repo, please make a Piazza post with your csid. Also take a look at your profile to see if you are in the CPSC310-2022W-T1 organization; you may have a 'team' (of one) that owns the repo.

Q. I'm currently not able to register in a lab, what should I do?

A. If no lab with space fits your schedule, you will need to talk to academic advising to have them put you into a lab. In the meantime, for the first week you should go to a lab even if you are not registered in it.

Q. How should I create expected output for performQuery?

A. You can use the reference UI and copy the result output.

Q. Tests were skipped when running my test suite, but everything is good locally?

A. This is usually caused by an error being thrown while running your tests against our implementation. It could be ill-formatted JSON for the query tests, zip files required by tests that aren't committed to Git, or incorrect file paths in tests.

Q. AutoTest is timing out with "Container did not complete for c0 in the allotted time", what should I do?

A. AutoTest will timeout if your tests are taking too long to run. A common issue is repeating expensive work that only needs to be done once. For example, adding a dataset is expensive! For performQuery tests, we recommend adding your datasets once in the before rather than repeatedly in the beforeEach.
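The cost difference is easy to see with a counter standing in for the expensive addDataset call (everything below is illustrative, not the real API): a before-style suite pays the setup once per describe block, while a beforeEach-style suite pays it once per test.

```typescript
// Simulates a suite of tests with either a single shared setup
// (before-style) or a setup repeated per test (beforeEach-style),
// counting how often the "expensive" setup runs.
async function runSuite(setupOnce: boolean, numTests: number): Promise<number> {
	let setupRuns = 0;
	const expensiveSetup = async (): Promise<void> => {
		setupRuns++; // imagine unzipping and parsing a large dataset here
	};

	if (setupOnce) {
		await expensiveSetup(); // before-style: shared by every test
	}
	for (let i = 0; i < numTests; i++) {
		if (!setupOnce) {
			await expensiveSetup(); // beforeEach-style: repeated per test
		}
		// ...one performQuery test would run here...
	}
	return setupRuns;
}

async function main(): Promise<void> {
	console.log(await runSuite(false, 50)); // beforeEach-style: 50 setups
	console.log(await runSuite(true, 50)); // before-style: 1 setup
}

main();
```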

Q. How do I interpret these clusters in my Mutant feedback from AutoTest?

A. Clusters refer to headers in the Checkpoint 1 specification. If you are missing a mutant in a cluster, it means that the mutant is causing our solution to fail to deliver a requirement described in that named section of the specification.

Injected Main File

To ensure that the correct members of the API are defined and exported correctly, we will add a file to your repo which imports the API before commencing the build. This file is as follows:

// Run from `src/Main.ts`
import * as fs from "fs-extra";
import * as zip from "jszip";
import {
    IInsightFacade,
    InsightDatasetKind,
    NotFoundError,
    ResultTooLargeError,
    InsightError,
    InsightDataset,
    InsightResult,
} from "./controller/IInsightFacade";
import InsightFacade from "./controller/InsightFacade";

const insightFacade: IInsightFacade = new InsightFacade();

const futureRows: Promise<InsightResult[]> = insightFacade.performQuery({});
const futureRemovedId: Promise<string> = insightFacade.removeDataset("foo");
const futureInsightDatasets: Promise<InsightDataset[]> = insightFacade.listDatasets();
const futureAddedIds: Promise<string[]> = insightFacade.addDataset("bar", "baz", InsightDatasetKind.Sections);

futureInsightDatasets.then((insightDatasets) => {
    const {id, numRows, kind} = insightDatasets[0];
});

const errors: Error[] = [
    new ResultTooLargeError("foo"),
    new NotFoundError("bar"),
    new InsightError("baz"),
];

Testing Promises Examples

The following examples show different ways you can test methods that return promises.

Resolving

it("plain chai", function () {
    return additionCalculator.add([1, 1])
        .then((res) => expect(res).to.equal(2));
});

it("chai-as-promised", function () {
    const result = additionCalculator.add([1, 1]);
    return expect(result).eventually.to.equal(2);
});

it("await", async function () {
    const result = await additionCalculator.add([1, 1]);
    expect(result).to.equal(2);
});

Chaining

it("plain chai", function () {
    return additionCalculator.add([1, 1])
        .then((res) => {
            expect(res).to.equal(2);
            return additionCalculator.add([2, 2]);
        })
        .then((res) => expect(res).to.equal(4));
});

it("chai-as-promised", function () {
    const result = additionCalculator.add([1, 1])
        .then(() => additionCalculator.add([2, 2]));
    return expect(result).eventually.to.equal(4);
});

it("await", async function () {
    const result1 = await additionCalculator.add([1, 1]);
    const result2 = await additionCalculator.add([2, 2]);
    expect(result1).to.equal(2);
    expect(result2).to.equal(4);
});

Rejecting

it("plain chai", function () {
    return additionCalculator.add([1001, 1])
        .then((res) => {
            throw new Error(`Resolved with: ${res}`);
        })
        .catch((err) => {
            expect(err).to.be.instanceof(TooLarge);
        });
});

it("chai-as-promised", function () {
    const result = additionCalculator.add([1001, 1]);
    return expect(result).eventually.to.be.rejectedWith(TooLarge);
});

it("await", async function () {
    try {
        await additionCalculator.add([]);
        expect.fail("Should have rejected!");
    } catch (err) {
        expect(err).to.be.instanceof(TooSimple);
    }
});