Can initialize and configure a new TypeScript project using yarn and TSConfig.
Can manage project dependencies through yarn and package.json.
Can navigate and extract specifications from a document.
Can translate a specification into a comprehensive suite of black box tests.
Can explain how to arrive at a good test suite and the different measures to assess the quality of the suite.
Can read and understand an EBNF specification.
Can write asynchronous programs using Promises, and knows their best practices (chaining, rejection handling).
Nothing yet!
UBC is a big place, and involves a large number of people doing a variety of tasks. The goal of this project is to provide a way to perform tasks required to run the university and to enable effective querying of the metadata from around campus. This will involve working with courses, sections, and rooms.
This will be a full stack web development project split into four sprints. The first three sprints are server-side development using Node. The fourth sprint is client-side development.
The vast majority of software is written by development teams. Even within large organizations, 'feature teams' usually comprise a small set of developers within a larger team context. The first checkpoint, C0, will be completed individually, but for the final three project checkpoints you will work in pairs. Your partner must be in the same lab section as you. If you want to work with someone who is in another section, one of you will have to transfer lab sections.
Your partner selection is extremely important. Be sure to make this choice carefully, as you will be responsible for working as a team for the remainder of the term. You must use the same partner for the duration of the project; no changes will be permitted. If you do not have a team organized by the Checkpoint 0 deadline, TAs will find you a partner during your first lab after the C0 deadline.
TypeScript. Your project will be constructed in TypeScript. If you do not know TypeScript, you are encouraged to start investigating the language soon. It is important to note that we will spend very little time in lecture and lab teaching this language. You will be expected to learn TypeScript on your own time.
While it might seem daunting to learn a new language on your own, the fluid nature of software systems requires that you get used to quickly learning new languages, frameworks, and tools. The syntax of TypeScript is similar to Java, which you used in 210. Google will be your friend for this project, as there are thousands of free tutorials and videos that can help you with this technology stack. TypeScript has many great resources; the TypeScript Handbook or the TypeScript Deep Dive would be good places to start. If you are starting from scratch, it is really important that you do not just read a bunch of code but actually write some, too! Consider using TypeScript or JavaScript REPLs as a lightweight way to do this.
Git. All of your project development will take place on GitHub. You will not be able to change your GitHub ID during the term, so do not change your CWL until after the final exam. Being familiar with Git is essential. Please take a look at the 'getting started' part of the Atlassian Git Introduction before the first lab if you are not familiar with Git. A shorter, less formal, guide is also available.
Allowable packages. The packages and external libraries (i.e., all of the code you did not write yourself) you can use for the project are limited and described in each checkpoint. You cannot install any additional packages. Essentially, if you are typing something like npm install <packagename> or yarn add <packagename> and we haven't explicitly asked you to do it, you will likely encounter problems.
The Project Grading page details how each checkpoint contributes to your overall course project grade.
All Checkpoint Deadlines are on the Schedule page.
Test-driven development (TDD) is one modern technique for building software. As the name suggests, engineers write tests for every requirement in the specification before they create the implementation. This makes it easier to ensure that the final product has a base level of testing.
Adopting TDD for the course project will help ensure that you understand all the requirements of the specification before getting buried in the details of your implementation. This is important because implementing code that doesn't meet the requirements will increase the amount of work you need to do for the project.
For Checkpoint 0, you will do two things:
Initialize Your Repository. You will bootstrap a node.js project.
Develop Your Test Suite. You will use Test Driven Development to build a test suite against the insightUBC Section's Specification.
Labs are NOT mandatory during Checkpoint 0. All labs will act like office hours, so you can drop into any lab to get help from a TA.
If you are currently not registered in a lab, register in a lab! Labs are required for future checkpoints. If no lab with available space fits your schedule, you will need to talk to academic advising to have them put you into a lab.
This will be an individual checkpoint (the only one in the project). We will check for copied code on all submissions so please make sure your work is your own.
Your grade for C0 will be:
20% for Initializing Your Repository
80% for Developing Your Test Suite
AutoTest is our system for evaluating your projects, and serves as the "client" for the project. AutoTest is a friendly bot who grades your project on GitHub automatically and provides feedback on your progress.
For C0, your project is automatically graded every time you push to GitHub with the command git push. Your grade will be the maximum grade you received from all submissions (commits in git terminology) made before the hard deadline.
You can request to see your grade by creating a comment with @310-bot #c0 on a commit (see figure below). We compute your grade on every submission, but you will only be able to request to see your grade once every 12 hours. Failing to build your project due to an unfinished repo bootstrap will not consume your limited AutoTest submission. Note that feedback may take more than 12 hours to be returned, especially close to the deadline when AutoTest is under heavy load.
The only timestamp AutoTest trusts is the timestamp associated with a git push event (e.g., when a commit is pushed to the git server). This is the timestamp that will be used for determining whether a commit was made before a deadline (this is because commit timestamps can be modified on the client side, but push timestamps are recorded on the AutoTest server itself). Make sure you push your work to GitHub before any deadline and know that any pushes after the deadline (even if some commits within that push appear to come from before the deadline) will not be considered.
In GitHub, navigate to view a commit and then add a comment (at the bottom of the page).
To ensure that your repo has been initialized correctly, we will be checking that the following conditions are met:
You have created the manifest file where dependencies will be listed (package.json).
You have installed all the required dependencies.
This may require type declarations.
You have created a valid tsconfig.json for the project.
Your tsconfig must extend the @tsconfig/node18 base tsconfig.
Your tsconfig must have the additional compiler option: noImplicitAny.
These are the only two requirements for your tsconfig file; anything else is optional.
Your project builds after we inject a Main.ts file which imports the API as defined in the insightUBC Section's Specification. This means that when we run the yarn tsc command in the project directory, the command should succeed (and it should succeed when you run it yourself, too!).
This Main.ts file can be seen in the Appendix below.
You do not need to add this Main.ts file yourself. We will inject the file into your project when it is being assessed by AutoTest.
A test file exists at test/controller/InsightFacade.spec.ts and can be executed.
The tests you write for C0 must all be contained within a file called test/controller/InsightFacade.spec.ts; this is what AutoTest will use to run your test suite. We will only invoke this test file.
To evaluate the completeness of your test suite for the spec, we will execute your suite against our own system and measure how well it detects a set of artificially injected mutants in our implementation. Your goal is to write tests that expose and catch these mutants.
This component of your grade will be computed using the following formula, which is explained further below:
(number of mutants killed / number of mutants to kill)
The number of mutants to kill IS LESS THAN the total number of mutants injected. You DO NOT need to kill every mutant for a full score, but the more mutants you can spot with your tests, the more thorough your suite likely is.
Free Mutant. We give you one free mutant and big hints about a second mutant. These freebies are written at the end of the Developing Your Test Suite section.
AutoTest will give you detailed error messages about issues with your project initialization. Failing to build due to an unfinished repo bootstrap will not consume your limited AutoTest submission. Here is an example of this:
An example of AutoTest Feedback when Initializing Your Repository.
After your project has been successfully initialized, AutoTest will evaluate your test suite by determining how many of our mutants you have killed. Note that we will not reveal our set of mutants to you (besides the one freebie mentioned above). The feedback from AutoTest that you will receive will be:
Your grade on this commit.
Cluster information about the location of mutants and which ones you have killed. Clusters refer to headers in this document. If you are missing a mutant in a cluster, it means you do not have a test for a requirement described in that named section of the specification.
Test information, which includes any of your tests that failed against our implementation, any skipped tests and the coverage of your tests against our implementation (shown as a Coverage Score). Any tests that fail against our implementation will not be included in the mutation testing pass.
Below is an example of the feedback you will receive:
An example of AutoTest feedback when you are trying to kill those pesky mutants!
Here are a few types of feedback that you will observe and an explanation of what each one means:
You will see this error if you have a dynamic test name in your test suite. The names of the failing tests will only be shown if the exact string of the test name is present in your source files.
This means a test written like
it("Should" + " add a valid dataset", function () {...});
will not have its name reported back upon failure, as the test name is created dynamically. (This restriction is in place to prevent grading data exfiltration.)
Skipped tests are usually caused by an error being thrown while running your tests against our implementation (i.e. a bug in your code). It could be an ill-formatted JSON for the query tests, zip files required by tests that aren't committed to git, incorrect file paths in tests, etc.
There is a bug in your test. Tests can have bugs, just like implementation code! A test has a Setup, Execution and Validation. Now your job is to figure out which one of these is causing the issue.
The Setup contains any code required to get your application into the desired testable state.
The Execution contains your method under test.
The Validation contains any asserts.
For example:
it("should list one dataset", async function () {
    // Setup
    await insightFacade.addDataset("ubc", content, InsightDatasetKind.Sections);
    // Execution
    const datasets = await insightFacade.listDatasets();
    // Validation
    expect(datasets).to.deep.equal([{
        id: "ubc",
        kind: InsightDatasetKind.Sections,
        numRows: 64612
    }]);
});
Setup
Check each parameter, is it correct? Am I passing in a valid value for each parameter?
Am I doing all the necessary setup for this test? If I'm testing performQuery, have I added the dataset to query?
Am I handling promises correctly?
Execution
Check each parameter, is it correct? Am I passing in a valid value for each parameter?
Am I handling promises correctly?
Validation
This is the only thing you can test locally! First, update your InsightFacade implementation to return the expected result whether it be a promise resolving with a value or a promise rejecting with an error.
For example, below I've updated listDataset to return the expected result for the above test:
listDatasets(): Promise<InsightDataset[]> {
return Promise.resolve([{id: "ubc", kind: InsightDatasetKind.Sections, numRows: 64612}]);
}
After updating your implementation, your test should pass locally.
AutoTest will timeout if your tests are taking too long to run. A common issue is repeating expensive work that only needs to be done once. For example, adding a dataset is expensive! For performQuery tests, we recommend adding your datasets once in a before rather than repeatedly in beforeEach. In addition, avoid adding the given dataset (pair.zip) when you don't have to. You can create a smaller dataset with only a couple valid sections (not 60k!) by deleting a lot of files from pair.zip and using this for most of your add, list or remove dataset tests.
You have an unhandled promise in your tests! You will need to go through each test and ensure you are handling promises correctly. If you are unsure what it means to "handle" a promise, check out the async cookbook module.
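When every promise is awaited and asserted on, even an operation that is expected to reject is straightforward to test. Below is a minimal sketch of the pattern, using a hypothetical addDataset stand-in (not the real InsightFacade method) so the example is self-contained:

```typescript
// Hypothetical stand-in for an operation that rejects on invalid input.
// In your suite, the real addDataset lives on InsightFacade.
async function addDataset(id: string): Promise<string[]> {
    if (id.trim().length === 0) {
        throw new Error("id must not be blank");
    }
    return [id];
}

// Await the promise inside try/catch and report the outcome.
// Forgetting the `await` would leave the rejection unhandled and make
// the test's result unpredictable.
async function expectBlankIdToReject(): Promise<string> {
    try {
        await addDataset("   ");
        return "promise resolved (the test should fail)";
    } catch (err) {
        return "rejected as expected: " + (err as Error).message;
    }
}
```

With chai-as-promised, the same check can be written more compactly, along the lines of expect(promise).to.eventually.be.rejected (returned or awaited so the assertion itself is handled).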
This message occurs when a test passes on one run but fails on another. Most likely, the flakiness is caused by the above issue, Unhandled Promises. As a first step, we recommend reviewing your test suite for any unhandled promises.
Step 1: Log into Classy. Within 24 hours you should have a repository provisioned.
Step 2: Log into GitHub Enterprise with your CWL account to view and clone your repo (you should have also been emailed a link).
If you cannot find your repo:
Make sure you are logged into github.students.cs.ubc.ca with your CWL, not github.com or github.ubc.ca.
Repo provisioning is done in batches every 24 hours. If you still don't have a repo after 24 hours, please make a Piazza post with your csid.
Also take a look at your profile and check if you see the CPSC310-2023W-T1 organization; you must have a team in that organization that owns the repo.
For this course, you will be using Git to manage your code. A description of how to use Git is given in our brief Git Cookbook.
Important: Before continuing with the instructions below, ensure that you have prepared your computer according to the instructions found in the README of your provisioned repo.
As mentioned in the requirements, there are lots of tools required to develop this project (node.js, yarn, etc). You may or may not have these tools already installed on your machine. Setting up an environment for a project is time-consuming; do not underestimate the time it will take to install the required tools and to get them to cooperate. It is your responsibility to determine what tools are missing from your machine and to install them.
For this project we will use yarn, a package manager that communicates with the npm package repository to use packages written by other developers. A package is a library or framework that we can add to our project to make our lives easier. For example, this project requires the use of the testing framework package Mocha. All information regarding a project's packages can be found in its package.json file: what packages are required for this project? What version does each package need to be?
Let's get started with adding packages to our project!
Step 1: Run yarn init to create your package.json file. yarn init will walk you through the required parts of a package.json file. Pick a name for your project and then you may use the default options for the remaining fields (by pressing enter).
Step 2: Add the required packages by using yarn add <package_name>
typescript and @tsconfig/node18 which adds TypeScript and a base configuration we will use at a later step. Bundled with typescript is the command tsc which is used to compile TypeScript code.
mocha and chai for testing your project.
fs-extra for reading and writing files to disk.
jszip for processing zip files.
chai-as-promised for promise assertion abstraction.
Type declaration packages for packages that do not have them bundled in the base: @types/node, @types/fs-extra, @types/mocha, @types/chai, and @types/chai-as-promised. These are used to ensure that the TypeScript files properly compile.
ts-node for running TypeScript files directly.
(Highly recommended, but optional) @ubccpsc310/folder-test for batch testing.
(Optional) nyc for coverage reporting.
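After these steps, your package.json will record each added package. A rough sketch of its shape is shown below; the project name is whatever you chose during yarn init, and the version numbers are filled in by yarn, so they are left as placeholders here:

```json
{
  "name": "insight-ubc",
  "version": "1.0.0",
  "dependencies": {
    "typescript": "...",
    "@tsconfig/node18": "...",
    "mocha": "...",
    "chai": "...",
    "chai-as-promised": "...",
    "fs-extra": "...",
    "jszip": "...",
    "ts-node": "..."
  }
}
```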
Your implementation of the logic required to satisfy the project specification lives in the src directory of your project repository.
Step 1: Create a src directory in the root of the project.
All files that test your implementation live in the test directory of your project repository.
Step 1: Create a test directory in the root of the project.
TypeScript requires a TSConfig file. To set up your TSConfig:
Step 1: Create a new file tsconfig.json in the root of the project.
Step 2: Write the contents of your tsconfig.json file. We recommend using this example in the TSConfig Bases documentation. However, you will need to apply the following changes:
Extend node18 instead of node12
Have the compiler option noImplicitAny
Have the compiler option "outDir": "dist". This causes all of your build files to go in a separate directory called dist.
If you also configure the include and exclude options of the tsconfig, we recommend that you:
Include the test directory in addition to the src directory
Do not exclude your test files (files appended with .spec.ts)
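Putting the requirements above together, a tsconfig.json that satisfies them could look like the sketch below (the include value is a suggestion, not a requirement):

```json
{
  "extends": "@tsconfig/node18/tsconfig.json",
  "compilerOptions": {
    "noImplicitAny": true,
    "outDir": "dist"
  },
  "include": ["src/**/*", "test/**/*"]
}
```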
We recommend adding a .gitignore file to the root of your project directory. This file lets git know which files and folders to ignore. git will not track the changes to these files or folders.
We recommend having the following in this file. Note, you can add to this as you see fit!
# node_modules contain the code for the dependencies that your project relies on
node_modules/*
# IntelliJ environment files (not necessary if you aren't using IntelliJ)
.idea/*
# insightUBC project
# Checkout Section 5.5 of C0 for more information
data/*
# Build files
dist/*
Congrats, your repository is now ready for development! However, from a grading perspective, you have not yet finished Initializing Your Repository. Before you can finish, you will need to learn about the insightUBC project. Afterwards, you can complete the necessary steps.
UBC has a wide variety of courses. Manually viewing information about the courses is painful and slow. Students and Professors (users) would like to be able to query information about courses to gain insights into UBC. InsightUBC will provide a way for users to manage their course section data and query this data for insights.
Users will interact with your project through a fixed API, defined through a provided interface, IInsightFacade.ts. Very important: do not alter the given API (interface) in any way, as it is used to grade your project!
The interface provides four methods: addDataset, listDatasets, removeDataset, and performQuery. Users will manage their course section data through the methods addDataset, listDatasets, and removeDataset, and will query their data using the method performQuery.
The contents of the API file, IInsightFacade.ts, are given below in the IInsightFacade.ts section. Read the entire file carefully: it contains details about the expected parameters, what the methods should do, and the error responses for failures.
For example, a user might write the following code to use your API:
function getNumberOfSectionsInUBCDataset(): Promise<number> {
    return fs.readFile("src/resources/archives/ubc-sections.zip")
        .then((buffer) => buffer.toString("base64"))
        .then((content) => new InsightFacade().addDataset("ubc", content, InsightDatasetKind.Sections))
        .then(() => new InsightFacade().listDatasets())
        .then((datasets) => datasets.find((dataset) => dataset.id === "ubc"))
        .then((dataset) => dataset!.numRows)
        .catch((error) => -1);
}
We allow users to perform three actions for managing their data:
Adding a dataset, so it is available for querying.
Listing all datasets that are available to query.
Removing a dataset, so it is no longer available for query.
Each of these actions has a corresponding API method defined in IInsightFacade.ts.
Without data, there is nothing to search through for insights! Before a user can query, they will need to add data to the system. All valid course sections should be extracted from the dataset and stored such that they can later be queried.
The following method is defined in the IInsightFacade.ts interface file:
addDataset(id: string, content: string, kind: InsightDatasetKind): Promise<string[]> adds a dataset to the internal model, providing the id of the dataset, the string of the content of the dataset, and the kind of the dataset. Any invalid inputs should be rejected.
Each of the three arguments to addDataset are described below:
A user can add multiple datasets to your project and they will be identified by the ID provided by the user. A valid id is an idstring, defined in the EBNF (see below). In addition, an id that is only whitespace is invalid.
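For illustration, the id rules (an EBNF idstring that is not only whitespace) can be sketched as a small validation helper; isValidId is a hypothetical name, not part of the required API:

```typescript
// A valid id is one or more characters, contains no underscore
// (per the idstring rule in the EBNF), and is not only whitespace.
function isValidId(id: string): boolean {
    return id.length > 0 && !id.includes("_") && id.trim().length > 0;
}
```

So "ubc" and "ubc courses" are valid ids, while "ubc_avg", "", and a string of spaces are not.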
The content parameter is the entire zip file, in the format of a base64 string. That's the entire zip file; all the data you need is contained in it. You should use the JSZip module to unzip, navigate through, and view the files inside.
A valid dataset:
Is a zip file.
Contains at least one valid section.
A valid course:
Is a JSON formatted file.
Is located within a folder called courses/ in the zip's root directory.
Contains one or more valid sections. Within a JSON formatted file, valid sections will be found within the "result" key.
A valid section:
Contains every field which can be used by a query (see the "Valid Query Keys" section below).
If a field used by a query is present in the JSON but contains something counter-intuitive, like an empty string, the section is still valid.
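The two rules above amount to a presence check over the queryable fields. A sketch follows; note that the real PAIR JSON uses its own field names, which differ from the query keys, so the names below are illustrative only:

```typescript
// Queryable fields, using the query-key names for illustration;
// the actual field names in the PAIR JSON differ.
const REQUIRED_FIELDS = [
    "avg", "pass", "fail", "audit", "year",
    "dept", "id", "instructor", "title", "uuid",
];

// A section is valid if every queryable field is present, even when a
// value is counter-intuitive, such as an empty string.
function isValidSection(section: Record<string, unknown>): boolean {
    return REQUIRED_FIELDS.every((field) => field in section);
}
```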
An example of a valid dataset which contains 64,612 valid UBC course sections can be found here. This data has been obtained from UBC PAIR and has not been modified in any way. The data is provided as a zip file: inside of the zip you will find a file for each of the courses offered at UBC. Each of those files contains a JSON object containing the information about each section of the course.
Unzip the example valid dataset to see what a valid JSON formatted file looks like. You can use an online JSON formatter to more easily view the JSON file contents.
For this checkpoint, the dataset kind will be sections; the rooms kind is invalid.
Users would like to be able to remove datasets that were previously added successfully.
The following method is defined in the IInsightFacade.ts interface file:
removeDataset(id: string): Promise<string> removes a dataset from the internal model, given the id.
A valid id is an idstring, as defined in the EBNF (see below). Like with addDataset, an id that is only whitespace is invalid. In addition, removing a nonexistent id should be rejected.
Users would like to be able to list all available datasets for querying.
The following method is defined in the IInsightFacade.ts interface file:
listDatasets(): Promise<InsightDataset[]> returns an array of currently added datasets. Each element of the array should describe a dataset following the InsightDataset interface which contains the dataset id, kind, and number of rows.
After a user has added a dataset, they should be able to query that dataset for insights.
The following method is defined in the IInsightFacade.ts interface file:
performQuery(query: unknown): Promise<InsightResult[]> performs a query on the dataset. It should first parse and validate the input query, then perform semantic checks, and evaluate the query only if it is valid.
Since the type for the query is unknown, technically anything could be passed. A valid query will be an object matching the EBNF. Note that this is not a stringified object; it will already be in object form (the example queries in the specification could be copy-pasted and would be valid).
A valid query:
Is based on the given EBNF (defined below)
Only references one dataset (via the query keys).
Has at most 5000 results. If this limit is exceeded, the query should reject with a ResultTooLargeError.
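As a sketch of that last rule: the local ResultTooLargeError class below stands in for the error exported by IInsightFacade.ts (import the real one in your project), and MAX_RESULTS is an assumed constant name:

```typescript
// Local stand-in mirroring the ResultTooLargeError exported by
// IInsightFacade.ts; in your project, import the real one instead.
class ResultTooLargeError extends Error {}

// Assumed name for the 5000-result limit from the specification.
const MAX_RESULTS = 5000;

// Throw (i.e., reject, inside a promise chain) when the query matches
// more sections than the limit allows.
function checkResultSize(numResults: number): void {
    if (numResults > MAX_RESULTS) {
        throw new ResultTooLargeError(`query matched ${numResults} results`);
    }
}
```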
Queries to the system should be JavaScript objects structured according to the following grammar (represented in EBNF):
WHERE defines which sections should be included in the results.
COLUMNS defines which keys should be included in each result.
ORDER defines what order the results should be in.
QUERY ::= '{' BODY ', ' OPTIONS '}'
// Note: a BODY with no FILTER (i.e. WHERE:{}) matches all entries.
BODY ::= 'WHERE:{' FILTER? '}'
FILTER ::= LOGICCOMPARISON | MCOMPARISON | SCOMPARISON | NEGATION
LOGICCOMPARISON ::= LOGIC ':[' FILTER_LIST ']'
MCOMPARISON ::= MCOMPARATOR ':{' mkey ':' number '}'
SCOMPARISON ::= 'IS:{' skey ': "' [*]? inputstring [*]? '" }' // Asterisks at the beginning or end of the inputstring should act as wildcards.
NEGATION ::= 'NOT :{' FILTER '}'
FILTER_LIST ::= '{' FILTER '}' | '{' FILTER '}, ' FILTER_LIST // comma separated list of filters containing at least one filter
LOGIC ::= 'AND' | 'OR'
MCOMPARATOR ::= 'LT' | 'GT' | 'EQ'
OPTIONS ::= 'OPTIONS:{' COLUMNS '}' | 'OPTIONS:{' COLUMNS ', ORDER:' key '}'
COLUMNS ::= 'COLUMNS:[' KEY_LIST ']'
KEY_LIST ::= key | key ', ' KEY_LIST // comma separated list of keys containing at least one key
key ::= mkey | skey
mkey ::= '"' idstring '_' mfield '"'
skey ::= '"' idstring '_' sfield '"'
mfield ::= 'avg' | 'pass' | 'fail' | 'audit' | 'year'
sfield ::= 'dept' | 'id' | 'instructor' | 'title' | 'uuid'
idstring ::= [^_]+ // One or more of any character, except underscore.
inputstring ::= [^*]* // Zero or more of any character, except asterisk.
Wildcards are the optional asterisks in SCOMPARISON. For example, "IS": {"sections_dept": "C*"} would look for any course department that starts with a C. Because both asterisks are optional, there are four possible configurations:
inputstring: Matches inputstring exactly
*inputstring: Ends with inputstring
inputstring*: Starts with inputstring
*inputstring*: Contains inputstring
Asterisks in the middle of the inputstring, such as input*string, are not allowed.
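The four configurations map directly onto plain string operations. Below is a sketch; matchesWildcard is a hypothetical helper name, and the code assumes the pattern has already been validated (no middle asterisks):

```typescript
// Match a value against an IS pattern with optional leading/trailing asterisks.
function matchesWildcard(value: string, pattern: string): boolean {
    const startsWithStar = pattern.startsWith("*");
    const endsWithStar = pattern.endsWith("*") && pattern.length > 1;
    const inner = pattern.slice(
        startsWithStar ? 1 : 0,
        endsWithStar ? pattern.length - 1 : pattern.length
    );
    if (startsWithStar && endsWithStar) {
        return value.includes(inner);   // *inputstring*: contains
    } else if (startsWithStar) {
        return value.endsWith(inner);   // *inputstring: ends with
    } else if (endsWithStar) {
        return value.startsWith(inner); // inputstring*: starts with
    }
    return value === inner;             // inputstring: exact match
}
```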
If ORDER is not specified, any order of the correct results is fine; there is no 'default'. Also, the ORDER key must be a key found in the KEY_LIST array of COLUMNS.
For instance, consider testing your results with a combination of expect(res).to.have.deep.members(expected) and a check on the result length. deep.members compares array contents while ignoring order, and the length check guards against duplicated or missing entries.
Tie Breaks
Often, you will find cases where the field being sorted on appears multiple times; for example, if you sort by department, many entries may be CPSC. The order within those CPSC rows does not matter.
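Since Array.prototype.sort is stable in modern JavaScript, sorting by the ORDER key automatically leaves tied rows in their prior (arbitrary) relative order, which satisfies this rule. A sketch follows; orderResults and compareValues are hypothetical helper names, and InsightResult mirrors the interface from IInsightFacade.ts:

```typescript
// Mirrors the InsightResult interface from IInsightFacade.ts.
interface InsightResult {
    [key: string]: string | number;
}

// Compare two result values: numbers numerically, otherwise as strings.
function compareValues(a: string | number, b: string | number): number {
    if (typeof a === "number" && typeof b === "number") {
        return a - b;
    }
    return String(a).localeCompare(String(b));
}

// Order results by a single key. Ties keep their relative order because
// sort is stable, and the spec allows any order among tied rows anyway.
function orderResults(results: InsightResult[], key: string): InsightResult[] {
    return [...results].sort((a, b) => compareValues(a[key], b[key]));
}
```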
In the above EBNF, the query keys are the mkey and skey. As defined in the EBNF, a valid query key has two parts, separated by an underscore: <idstring>_<mfield | sfield>.
idstring is the dataset ID provided by the user when they add the dataset (the id parameter).
mfield | sfield is the column key that represents a piece of information about the course.
For example, if a user has added a dataset with the id ubc-courses, then a valid query key is ubc-courses_avg.
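Splitting and validating such a key can be sketched as follows; parseQueryKey is a hypothetical helper, with the field lists taken from the EBNF above:

```typescript
const MFIELDS = ["avg", "pass", "fail", "audit", "year"];
const SFIELDS = ["dept", "id", "instructor", "title", "uuid"];

// Split a query key into <idstring>_<mfield | sfield>. Because a valid
// idstring cannot contain underscores, a valid key has exactly one "_".
function parseQueryKey(key: string): { id: string; field: string } | null {
    const parts = key.split("_");
    if (parts.length !== 2) {
        return null;
    }
    const [id, field] = parts;
    if (id.trim().length === 0) {
        return null;
    }
    if (!MFIELDS.includes(field) && !SFIELDS.includes(field)) {
        return null;
    }
    return { id, field };
}
```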
The valid dataset keys are the mfields (avg, pass, fail, audit, year), whose values are numbers, and the sfields (dept, id, instructor, title, uuid), whose values are strings. For example, the following query finds all sections with an average greater than 97, ordered by average:
{
"WHERE":{
"GT":{
"sections_avg":97
}
},
"OPTIONS":{
"COLUMNS":[
"sections_dept",
"sections_avg"
],
"ORDER":"sections_avg"
}
}
The result for this would look like:
[
{ "sections_dept": "math", "sections_avg": 97.09 },
{ "sections_dept": "math", "sections_avg": 97.09 },
{ "sections_dept": "epse", "sections_avg": 97.09 },
{ "sections_dept": "epse", "sections_avg": 97.09 },
{ "sections_dept": "math", "sections_avg": 97.25 },
{ "sections_dept": "math", "sections_avg": 97.25 },
{ "sections_dept": "epse", "sections_avg": 97.29 },
{ "sections_dept": "epse", "sections_avg": 97.29 },
{ "sections_dept": "nurs", "sections_avg": 97.33 },
{ "sections_dept": "nurs", "sections_avg": 97.33 },
{ "sections_dept": "epse", "sections_avg": 97.41 },
{ "sections_dept": "epse", "sections_avg": 97.41 },
{ "sections_dept": "cnps", "sections_avg": 97.47 },
{ "sections_dept": "cnps", "sections_avg": 97.47 },
{ "sections_dept": "math", "sections_avg": 97.48 },
{ "sections_dept": "math", "sections_avg": 97.48 },
{ "sections_dept": "educ", "sections_avg": 97.5 },
{ "sections_dept": "nurs", "sections_avg": 97.53 },
{ "sections_dept": "nurs", "sections_avg": 97.53 },
{ "sections_dept": "epse", "sections_avg": 97.67 },
{ "sections_dept": "epse", "sections_avg": 97.69 },
{ "sections_dept": "epse", "sections_avg": 97.78 },
{ "sections_dept": "crwr", "sections_avg": 98 },
{ "sections_dept": "crwr", "sections_avg": 98 },
{ "sections_dept": "epse", "sections_avg": 98.08 },
{ "sections_dept": "nurs", "sections_avg": 98.21 },
{ "sections_dept": "nurs", "sections_avg": 98.21 },
{ "sections_dept": "epse", "sections_avg": 98.36 },
{ "sections_dept": "epse", "sections_avg": 98.45 },
{ "sections_dept": "epse", "sections_avg": 98.45 },
{ "sections_dept": "nurs", "sections_avg": 98.5 },
{ "sections_dept": "nurs", "sections_avg": 98.5 },
{ "sections_dept": "nurs", "sections_avg": 98.58 },
{ "sections_dept": "nurs", "sections_avg": 98.58 },
{ "sections_dept": "epse", "sections_avg": 98.58 },
{ "sections_dept": "epse", "sections_avg": 98.58 },
{ "sections_dept": "epse", "sections_avg": 98.7 },
{ "sections_dept": "nurs", "sections_avg": 98.71 },
{ "sections_dept": "nurs", "sections_avg": 98.71 },
{ "sections_dept": "eece", "sections_avg": 98.75 },
{ "sections_dept": "eece", "sections_avg": 98.75 },
{ "sections_dept": "epse", "sections_avg": 98.76 },
{ "sections_dept": "epse", "sections_avg": 98.76 },
{ "sections_dept": "epse", "sections_avg": 98.8 },
{ "sections_dept": "spph", "sections_avg": 98.98 },
{ "sections_dept": "spph", "sections_avg": 98.98 },
{ "sections_dept": "cnps", "sections_avg": 99.19 },
{ "sections_dept": "math", "sections_avg": 99.78 },
{ "sections_dept": "math", "sections_avg": 99.78 }
]
A second example query, with more complex filtering:
{
"WHERE":{
"OR":[
{
"AND":[
{
"GT":{
"ubc_avg":90
}
},
{
"IS":{
"ubc_dept":"adhe"
}
}
]
},
{
"EQ":{
"ubc_avg":95
}
}
]
},
"OPTIONS":{
"COLUMNS":[
"ubc_dept",
"ubc_id",
"ubc_avg"
],
"ORDER":"ubc_avg"
}
}
The result of this query would be:
[
{ "ubc_dept": "adhe", "ubc_id": "329", "ubc_avg": 90.02 },
{ "ubc_dept": "adhe", "ubc_id": "412", "ubc_avg": 90.16 },
{ "ubc_dept": "adhe", "ubc_id": "330", "ubc_avg": 90.17 },
{ "ubc_dept": "adhe", "ubc_id": "412", "ubc_avg": 90.18 },
{ "ubc_dept": "adhe", "ubc_id": "330", "ubc_avg": 90.5 },
{ "ubc_dept": "adhe", "ubc_id": "330", "ubc_avg": 90.72 },
{ "ubc_dept": "adhe", "ubc_id": "329", "ubc_avg": 90.82 },
{ "ubc_dept": "adhe", "ubc_id": "330", "ubc_avg": 90.85 },
{ "ubc_dept": "adhe", "ubc_id": "330", "ubc_avg": 91.29 },
{ "ubc_dept": "adhe", "ubc_id": "330", "ubc_avg": 91.33 },
{ "ubc_dept": "adhe", "ubc_id": "330", "ubc_avg": 91.33 },
{ "ubc_dept": "adhe", "ubc_id": "330", "ubc_avg": 91.48 },
{ "ubc_dept": "adhe", "ubc_id": "329", "ubc_avg": 92.54 },
{ "ubc_dept": "adhe", "ubc_id": "329", "ubc_avg": 93.33 },
{ "ubc_dept": "sowk", "ubc_id": "570", "ubc_avg": 95 },
{ "ubc_dept": "rhsc", "ubc_id": "501", "ubc_avg": 95 },
{ "ubc_dept": "psyc", "ubc_id": "501", "ubc_avg": 95 },
{ "ubc_dept": "psyc", "ubc_id": "501", "ubc_avg": 95 },
{ "ubc_dept": "obst", "ubc_id": "549", "ubc_avg": 95 },
{ "ubc_dept": "nurs", "ubc_id": "424", "ubc_avg": 95 },
{ "ubc_dept": "nurs", "ubc_id": "424", "ubc_avg": 95 },
{ "ubc_dept": "musc", "ubc_id": "553", "ubc_avg": 95 },
{ "ubc_dept": "musc", "ubc_id": "553", "ubc_avg": 95 },
{ "ubc_dept": "musc", "ubc_id": "553", "ubc_avg": 95 },
{ "ubc_dept": "musc", "ubc_id": "553", "ubc_avg": 95 },
{ "ubc_dept": "musc", "ubc_id": "553", "ubc_avg": 95 },
{ "ubc_dept": "musc", "ubc_id": "553", "ubc_avg": 95 },
{ "ubc_dept": "mtrl", "ubc_id": "599", "ubc_avg": 95 },
{ "ubc_dept": "mtrl", "ubc_id": "564", "ubc_avg": 95 },
{ "ubc_dept": "mtrl", "ubc_id": "564", "ubc_avg": 95 },
{ "ubc_dept": "math", "ubc_id": "532", "ubc_avg": 95 },
{ "ubc_dept": "math", "ubc_id": "532", "ubc_avg": 95 },
{ "ubc_dept": "kin", "ubc_id": "500", "ubc_avg": 95 },
{ "ubc_dept": "kin", "ubc_id": "500", "ubc_avg": 95 },
{ "ubc_dept": "kin", "ubc_id": "499", "ubc_avg": 95 },
{ "ubc_dept": "epse", "ubc_id": "682", "ubc_avg": 95 },
{ "ubc_dept": "epse", "ubc_id": "682", "ubc_avg": 95 },
{ "ubc_dept": "epse", "ubc_id": "606", "ubc_avg": 95 },
{ "ubc_dept": "edcp", "ubc_id": "473", "ubc_avg": 95 },
{ "ubc_dept": "edcp", "ubc_id": "473", "ubc_avg": 95 },
{ "ubc_dept": "econ", "ubc_id": "516", "ubc_avg": 95 },
{ "ubc_dept": "econ", "ubc_id": "516", "ubc_avg": 95 },
{ "ubc_dept": "crwr", "ubc_id": "599", "ubc_avg": 95 },
{ "ubc_dept": "crwr", "ubc_id": "599", "ubc_avg": 95 },
{ "ubc_dept": "crwr", "ubc_id": "599", "ubc_avg": 95 },
{ "ubc_dept": "crwr", "ubc_id": "599", "ubc_avg": 95 },
{ "ubc_dept": "crwr", "ubc_id": "599", "ubc_avg": 95 },
{ "ubc_dept": "crwr", "ubc_id": "599", "ubc_avg": 95 },
{ "ubc_dept": "crwr", "ubc_id": "599", "ubc_avg": 95 },
{ "ubc_dept": "cpsc", "ubc_id": "589", "ubc_avg": 95 },
{ "ubc_dept": "cpsc", "ubc_id": "589", "ubc_avg": 95 },
{ "ubc_dept": "cnps", "ubc_id": "535", "ubc_avg": 95 },
{ "ubc_dept": "cnps", "ubc_id": "535", "ubc_avg": 95 },
{ "ubc_dept": "bmeg", "ubc_id": "597", "ubc_avg": 95 },
{ "ubc_dept": "bmeg", "ubc_id": "597", "ubc_avg": 95 },
{ "ubc_dept": "adhe", "ubc_id": "329", "ubc_avg": 96.11 }
]
The high-level API you must support is shown below; these declarations should be in your project in src/controller/IInsightFacade.ts.
/*
* This is the primary high-level API for the project. In this folder there should be:
* A class called InsightFacade; it should be in a file called InsightFacade.ts.
* You should not change this interface at all or the test suite will not work.
*/
export enum InsightDatasetKind {
Sections = "sections",
Rooms = "rooms",
}
export interface InsightDataset {
id: string;
kind: InsightDatasetKind;
numRows: number;
}
export interface InsightResult {
[key: string]: string | number;
}
export class InsightError extends Error {
constructor(message?: string) {
super(message);
Error.captureStackTrace(this, InsightError);
}
}
export class NotFoundError extends Error {
constructor(message?: string) {
super(message);
Error.captureStackTrace(this, NotFoundError);
}
}
export class ResultTooLargeError extends Error {
constructor(message?: string) {
super(message);
Error.captureStackTrace(this, ResultTooLargeError);
}
}
export interface IInsightFacade {
/**
* Add a dataset to insightUBC.
*
* @param id The id of the dataset being added.
* @param content The base64 content of the dataset. This content should be in the form of a serialized zip file.
* @param kind The kind of the dataset
*
* @return Promise <string[]>
*
* The promise should fulfill on a successful add, reject for any failures.
* The promise should fulfill with a string array,
* containing the ids of all currently added datasets upon a successful add.
* The promise should reject with an InsightError describing the error.
*
* An id is invalid if it contains an underscore, or is only whitespace characters.
* If id is the same as the id of an already added dataset, the dataset should be rejected and not saved.
*
* After receiving the dataset, it should be processed into a data structure of
* your design. The processed data structure should be persisted to disk; your
* system should be able to load this persisted value into memory for answering
* queries.
*
* Ultimately, a dataset must be added or loaded from disk before queries can
* be successfully answered.
*/
addDataset(id: string, content: string, kind: InsightDatasetKind): Promise<string[]>;
/**
* Remove a dataset from insightUBC.
*
* @param id The id of the dataset to remove.
*
* @return Promise <string>
*
* The promise should fulfill upon a successful removal, reject on any error.
* Attempting to remove a dataset that hasn't been added yet counts as an error.
*
* An id is invalid if it contains an underscore, or is only whitespace characters.
*
* The promise should fulfill with the id of the dataset that was removed.
* The promise should reject with a NotFoundError (if a valid id was not yet added)
* or an InsightError (invalid id or any other source of failure) describing the error.
*
* This will delete both the disk and memory caches for the dataset with this id, meaning
* that subsequent queries for that id should fail unless a new addDataset happens first.
*/
removeDataset(id: string): Promise<string>;
/**
* Perform a query on insightUBC.
*
* @param query The query to be performed.
*
* If a query is incorrectly formatted, references a dataset not added (in memory or on disk),
* or references multiple datasets, it should be rejected with an InsightError.
* If a query would return more than 5000 results, it should be rejected with a ResultTooLargeError.
*
* @return Promise <InsightResult[]>
*
* The promise should fulfill with an array of results.
* The promise should reject with an InsightError describing the error.
*/
performQuery(query: unknown): Promise<InsightResult[]>;
/**
* List all currently added datasets, their types, and number of rows.
*
* @return Promise <InsightDataset[]>
* The promise should fulfill with an array of currently added InsightDatasets, and will only fulfill.
*/
listDatasets(): Promise<InsightDataset[]>;
}
More information about IInsightFacade file contents:
InsightDatasetKind is an enum specifying the two possible dataset types.
InsightDataset is an interface for a simple object storing metadata about an added dataset.
InsightError, NotFoundError, ResultTooLargeError are Error subtypes potentially returned by the API.
Important: the InsightDataset and InsightResult interfaces should be treated as final! In other words, they should not be extended. (Sorry, TypeScript does not have language support for this restriction.) This kind of extension is unnecessary (those types are already quite general), and our assertions depend on the exact type signatures provided.
// Yes
const myDataset: InsightDataset = {
id: "foo",
kind: InsightDatasetKind.Sections,
numRows: 1
};
// No
class DatasetClass implements InsightDataset { ... }
const myDataset: DatasetClass = new DatasetClass(...);
Your implementation will need to handle "crashes". A crash occurs when a previously created InsightFacade instance is no longer "useable". Below is an example where a previously used instance becomes unusable, so a new instance is created.
const facade = new InsightFacade();
await facade.addDataset("ubc", dataset, InsightDatasetKind.Sections);
// ahhh a crash happened! Now we need to create a new instance!
const newInstance = new InsightFacade();
Note: There is no code to replicate a crash. When writing tests for crashes, you can simply create a new instance of InsightFacade as seen above.
After the instance "crashes", the user cannot use their old, crashed instance. If the user creates another insightUBC instance, the new instance should be in the previous state (i.e., have access to previously added datasets). The user should be able to manage and query datasets that were added prior to the crash. For example, when a user creates a new instance of InsightFacade, they should be able to query datasets added to "old" instances. All InsightFacade interface methods (addDataset, listDatasets, removeDataset, and performQuery) should be able to handle crashes.
To handle a crash, you must write a copy of the valid datasets to disk. Disk behaves differently than memory: disk persists between InsightFacade instances, but memory does not. For example, storing an array in a class variable (e.g. private arrayInMemory = ["a", "b", "c"];) stores that array in memory, while saving that array in a file stores it on disk.
The valid dataset files should be saved to the <PROJECT_DIR>/data directory, so the datasets can be read by new instances. The dataset files should be read and written using the fs-extra package. You can store the data on disk in whatever way that works for you. You just need to be able to (a) read it, and (b) remove it if removeDataset is called with the appropriate id. This can be a single file, a folder full of files, text file(s), json file(s), etc.
Make sure to not commit any files in <PROJECT_DIR>/data to git as this may cause unpredicted test failures.
You may assume that there is only one instance of InsightFacade running at any given time.
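As a concrete illustration, persistence could be sketched as below using Node's built-in fs module (fs-extra exposes a superset of this API). The one-JSON-file-per-dataset layout and the function names are illustrative assumptions, not a required design:

```typescript
import * as fs from "fs";
import * as path from "path";

// Illustrative layout (an assumption, not a requirement): one JSON file per
// dataset id under <PROJECT_DIR>/data.
const persistDir = "./data";

function saveDataset(id: string, rows: unknown[]): void {
	// Persist to disk so a future InsightFacade instance can recover the data.
	fs.mkdirSync(persistDir, { recursive: true });
	fs.writeFileSync(path.join(persistDir, `${id}.json`), JSON.stringify(rows));
}

function loadDataset(id: string): unknown[] {
	// Read the persisted dataset back into memory, e.g. after a "crash".
	return JSON.parse(fs.readFileSync(path.join(persistDir, `${id}.json`), "utf-8"));
}

function deleteDataset(id: string): void {
	// Support removeDataset: the on-disk copy must be deleted as well.
	fs.unlinkSync(path.join(persistDir, `${id}.json`));
}
```

Any on-disk format works, as long as a new instance can read it back and removeDataset can delete it.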
After your repository has been bootstrapped, you are ready to start building your test suite for the project! You will be writing unit tests for the four methods of the InsightFacade class. The tests you write for C0 must all be contained within the file test/controller/InsightFacade.spec.ts, as this is what AutoTest will use to run your test suite. Ensure that your tests do not rely on your underlying implementation (i.e., do not reference any code in your src directory, as your src directory will be replaced).
Your goal is to build a test suite that completely covers the insightUBC Section's Specification.
The last step to initializing your repository is to:
Create a controller directory within your src directory.
Add the IInsightFacade.ts file within your controller directory (the contents of which is given in the above specification).
Create the InsightFacade.ts file within your controller directory. The contents of InsightFacade.ts must include an InsightFacade class that is the default export of the file and implements IInsightFacade.
Create the required spec file: test/controller/InsightFacade.spec.ts.
We will be using the Mocha Test Environment with Chai Expectations for testing.
Mocha is a framework for running your tests; it provides a structure for your test suite, defining methods such as before, describe, and it. Chai is an assertion library that provides the assertion functions (like expect) used to determine whether a test has passed or failed.
You may optionally use the chai-as-promised plugin to simplify assertions on PromiseLike objects. Chai-as-promised is necessary to use eventually in your assert statements. Some examples for testing async methods are included in the async cookbook.
To run your tests, we recommend adding the following to your package.json under scripts :
"test": "mocha --require ts-node/register --timeout 10000 test/**/*.spec.ts"
With this script, you can run your tests via the command yarn test.
Since InsightFacade is not yet implemented, any tests you add should FAIL!
More specifically, your tests should fail when you run them locally against an invalid implementation of InsightFacade. Note that in this project, tests pass by default if you don't include or check assertions in your tests. Therefore, you must make sure you have explicitly defined asserts for every code path in every test and that the asserts check the correct conditions (according to the provided spec).
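To see why assertion-free tests pass vacuously, consider this toy stand-in for a test runner (a deliberately simplified sketch, not how Mocha actually works): a test "passes" whenever its function returns without throwing.

```typescript
// Toy runner: a test passes if its function returns without throwing.
function runTest(fn: () => unknown): "pass" | "fail" {
	try {
		fn();
		return "pass";
	} catch {
		return "fail";
	}
}

// No assertions at all: nothing can throw, so this always "passes".
const vacuous = runTest(() => 2 + 2); // computed, but never asserted on

// An un-returned promise: the rejection happens after the runner has already
// moved on, so the runner never sees it and the test still "passes".
const sneaky = runTest(() => {
	Promise.reject(new Error("assertion failed")).catch(() => {});
});

// An explicit assertion against a buggy implementation: the failure IS caught.
const buggyAdd = (a: number, b: number): number => a - b; // bug: subtracts
const caught = runTest(() => {
	if (buggyAdd(2, 2) !== 4) {
		throw new Error("expected 4");
	}
});
```

This is why every test needs explicit assertions, and why promises must be returned to the framework.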
You will write tests against the four methods of the InsightFacade API. The expected behaviour of these methods is outlined in the insightUBC Section's Specification. Before writing any tests, we recommend reading through the entire specification. Then, read through the specification a second time! On your second reading, write down a list of tests you should write to cover it. Think about both the successful and error cases (when a method should resolve or reject).
All of the InsightFacade API methods return promises, so be careful to handle promises properly in your tests. Mocha has documentation on writing tests with promises, and specifically on how you need to return every promise for Mocha to be aware of it. For some tests, this will require the use of promise chaining: for example, if you wanted to add a dataset and then remove it. Check out the Async Cookbook and the example repo Addition Calculator to see how to chain promises.
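The chaining pattern can be sketched as follows. FakeFacade is a hypothetical in-memory stand-in, used only so the example is self-contained; your real tests would call InsightFacade:

```typescript
// Hypothetical stand-in for InsightFacade, just to illustrate promise chaining.
class FakeFacade {
	private ids: string[] = [];
	public addDataset(id: string): Promise<string[]> {
		this.ids.push(id);
		return Promise.resolve([...this.ids]);
	}
	public removeDataset(id: string): Promise<string> {
		this.ids = this.ids.filter((existing) => existing !== id);
		return Promise.resolve(id);
	}
}

// Chain: add a dataset, and only once that completes, remove it.
// In a Mocha test, `return` this whole chain so Mocha waits for it.
const facade = new FakeFacade();
const chain = facade.addDataset("ubc").then((ids) => {
	if (!ids.includes("ubc")) {
		throw new Error("add did not report the new id");
	}
	return facade.removeDataset("ubc");
});
```

Each `.then` callback runs only after the previous promise resolves, which is what makes "add, then remove" safe to express.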
A great first test to write is the test that kills the free mutant mentioned below.
To test the addDataset method, you'll need to generate zip files with differing content. We recommend you place all of your test zip files in test/resources/archives.
The given dataset, pair.zip, is very large and has over 60k sections! Please use it sparingly as it will cause timeout issues. You will need it for at least one addDataset test and for your performQuery tests, but otherwise use it with care. You can create a smaller, valid dataset by deleting many files from pair.zip to drastically decrease the number of sections.
For addDataset, the content parameter is a base64 string representation of the zip file. To help with converting your zip files into base64 strings, we have created a utility file that you may choose to add to your project. The utility file also contains a method to remove the data directory, which should be done after every test so that no state is maintained between tests.
/test/resources/archives/TestUtil.ts
import * as fs from "fs-extra";
const persistDir = "./data";
/**
* Convert a file into a base64 string.
*
* @param name The name of the file to be converted.
*
* @return The base64 string representation of the file
*/
const getContentFromArchives = (name: string): string =>
fs.readFileSync("test/resources/archives/" + name).toString("base64");
/**
* Removes all files within the persistDir.
*/
function clearDisk(): void {
fs.removeSync(persistDir);
}
export { getContentFromArchives, clearDisk };
After writing down all your tests, you may have noticed that there are a lot of possible tests for the performQuery method. To help with writing and executing those tests, we have created two tools: the Reference UI and folder-test.
There is a reference UI to help with generating query test results. The reference UI is necessary to determine what the expected result is for a query. For C0 only, the order of the results from performQuery in the reference UI will be identical to the order of the results returned by our implementation. This will not be the case in future checkpoints.
When writing tests for the performQuery method, you will find that the tests share a common structure: define a query and its expected results, then check that performQuery returns the correct results for that query. To simplify this process (and to ensure that your InsightFacade.spec.ts file doesn't become cluttered), we have created an optional library called folder-test.
folder-test allows you to define test queries and results in separate files. These files are used to automatically generate tests that check whether the query returns the correct results. Thus, in addition to writing tests in InsightFacade.spec.ts using it(), you can also write tests for performQuery by creating new JSON files in a directory like test/resources/queries. As you add more valid JSON files to test/resources/queries you'll see that the number of tests that are run increases which is a sign that things are working as expected.
You can also feel free to make tests that don't use these provided structures when making your tests, for example if you think of a scenario that you don't feel is easy to test using our library.
The example repository, Addition Calculator, uses folder-test to test an asynchronous method so it might be a great place to start after reading the documentation.
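An individual folder-test query file might look something like the sketch below. The exact field names (title, input, errorExpected, expected) are an assumption that depends on the folder-test version you use, so confirm the schema against the folder-test documentation; the expected array would hold the result you generate with the Reference UI (elided here):

```json
{
	"title": "SELECT dept, avg WHERE avg > 97, ORDER by avg",
	"input": {
		"WHERE": { "GT": { "sections_avg": 97 } },
		"OPTIONS": {
			"COLUMNS": ["sections_dept", "sections_avg"],
			"ORDER": "sections_avg"
		}
	},
	"errorExpected": false,
	"expected": []
}
```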
As mentioned in the grading section, we will give you the first mutant for free! This will allow you to verify that everything is set up correctly to kill the mutants.
The first mutant is in the API method addDataset() and it causes addDataset() to accept an empty dataset id. For example, if a valid dataset is added with the id "" and the type "sections", it will be successfully added (Oh No! :-).
In the Resources section below, there is a section that does a walk through on how to kill this mutant.
We'd also like to drop a big hint about the location of another mutant. In the query EBNF there are a variety of valid FILTERs. If you write a test for each kind of filter, you should be able to kill another pesky mutant!
Have you read the Project Overview page yet? It contains an overview of the four project checkpoints and how each part will connect to build insightUBC.
A great first step would be to ensure that all the required tools are on your machine: git, node, yarn. Then, work through the Initializing Your Repository section, using git to commit your progress. Google as always is an excellent resource, as most things used in this course are fairly popular and well documented.
If you aren't familiar with promises and asynchronous code, it's worth taking the time to learn about it through one or both of the course provided resources: Async Cookbook and/or Promises Video.
When you are ready to start developing your test suite, we recommend starting with the walk through that explains how to kill the first mutant (the walk through is below). Then follow the hints regarding where the second mutant lives.
The following resources have been created by course staff to assist with the project.
Typescript: an introduction to TypeScript.
Promises: an introduction to promises and their asynchronous properties.
Git Cookbook: learn the basics of Git.
Async Cookbook: learn about promises and the differences between synchronous and asynchronous code.
Addition Calculator: a basic project that uses TypeScript, asynchronous code (Promises), folder-test, Mocha, Chai, Node/Yarn, and chai-as-promised. It provides an example of how to develop and test an asynchronous method, including how to use folder-test.
The free mutant:
The first mutant is in the API method addDataset() and it causes addDataset() to accept an empty dataset id. For example, if a valid dataset is added with the id "" and the type "sections", it will be successfully added (Oh No! :-).
Let's kill this mutant together!
This mutant is in the addDataset API method, which is defined in the IInsightFacade.ts file as:
addDataset(id: string, content: string, kind: InsightDatasetKind): Promise<string[]>;
Let's look at each part of a test and what we will require to kill the mutant: Setup, Execution and Validation.
The Setup contains any code required to get your application into the desired testable state.
The Execution contains your method under tests.
The Validation contains any asserts.
Setup
Do we need our InsightFacade instance to be in any specific state for this test? What method calls on InsightFacade (like addDataset, listDatasets etc) do we require to run this test?
For this test, we require no setup. We aren't relying on another dataset already being present or any other state for Execution, so we can move onto Execution.
Execution
For the execution, we need to call our method (addDataset) with the correct arguments. Let's look at each argument one by one.
id
As mentioned in the free mutant description, we want our id to be an empty string, so "". The empty string is considered an invalid dataset id.
content
The content should be valid, since we only want this test to fail due to our invalid id, not because of any other invalid input.
Looking at IInsightFacade again, it describes the content as:
@param content The base64 content of the dataset. This content should be in the form of a serialized zip file.
Ok, so we want the base64 representation of a valid sections dataset zip file. A valid dataset can be found in the "Valid Content argument to addDataset" section. Once downloaded, we recommend adding it to the test/resources/archives directory as mentioned in the "Writing Your Tests" section.
Now we need to convert this valid sections dataset into a base64 string. Also in the "Writing Your Tests" section, you will find the /test/resources/archives/TestUtil.ts file. We can use the helper method getContentFromArchives to convert our zip file into a base64 string.
/**
* Convert a file into a base64 string.
*
* @param name The name of the file to be converted.
*
* @return The base64 string representation of the file
*/
const getContentFromArchives = (name: string): string =>
fs.readFileSync("test/resources/archives/" + name).toString("base64");
We will convert the zip file into a base64 string with:
const sections = getContentFromArchives("pair.zip");
kind
The kind should be valid, and since we are adding a sections dataset, it should be InsightDatasetKind.Sections.
Putting that all together, the Execution part of our test will look like:
const facade = new InsightFacade();
const sections = getContentFromArchives("pair.zip");
const result = facade.addDataset("", sections, InsightDatasetKind.Sections);
Validation
addDataset returns a promise, which means the result object in our Execution will be a promise. There are a variety of ways to write tests with promises. Check out the Async Cookbook and the example repo Addition Calculator for examples.
We recommend using the chai-as-promised library. You will need to set this up before the asserts will work correctly. Look at the "Installation and Setup" part of their documentation.
We are expecting our addDataset to reject, since we are passing in invalid arguments (the id). For a rejecting promise, we will want to use the following assert:
expect(result).to.eventually.be.rejectedWith(InsightError);
The eventually keyword comes from chai-as-promised and is described in their documentation.
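For reference, the setup typically amounts to registering the plugin with Chai once, near the top of your spec file. The exact import style can vary with your chai and chai-as-promised versions and tsconfig settings, so treat this as a sketch and confirm against the chai-as-promised "Installation and Setup" documentation:

```typescript
import { expect, use } from "chai";
import chaiAsPromised from "chai-as-promised";

// Register the plugin so `expect(...).to.eventually...` assertions work.
use(chaiAsPromised);
```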
Mocha
Mocha is the test framework and provides the structure in which we should write our tests. In their documentation, they explain how to use Mocha with promises and the different useful methods (it, before). If we were to move our above Setup/Execution/Validation into a test, it would look like:
it("should reject with an empty dataset id", function() {
const facade = new InsightFacade();
const sections = getContentFromArchives("pair.zip");
const result = facade.addDataset("", sections, InsightDatasetKind.Sections);
return expect(result).to.eventually.be.rejectedWith(InsightError);
});
Mocha requires all promises to be returned. This is a very important step for handling promises: all promises must be returned so Mocha can wait for them to resolve or reject appropriately.
But... we can do much better! The facade and sections dataset will be used in other tests, so we can clean things up by using Mocha's before and beforeEach hooks. Let's also use some describes!
describe("InsightFacade", function() {
describe("addDataset", function() {
let sections: string;
let facade: InsightFacade;
before(function() {
sections = getContentFromArchives("pair.zip");
});
beforeEach(function() {
facade = new InsightFacade();
});
it("should reject with an empty dataset id", function() {
const result = facade.addDataset("", sections, InsightDatasetKind.Sections);
return expect(result).to.eventually.be.rejectedWith(InsightError);
});
});
});
sections goes into the before because it acts like a constant and its state won't change between tests. However, facade's state will change since we'll be adding datasets and removing them, which is why we reinstantiate it between each test.
clearDisk()
In Section 5.5 Handling Crashes, we mention how datasets will be written to disk so they can be persisted across InsightFacade instances. Let's look at the following test suite together:
describe("InsightFacade", function() {
describe("addDataset", function() {
let sections: string;
let facade: InsightFacade;
before(function() {
sections = getContentFromArchives("pair.zip");
});
beforeEach(function() {
facade = new InsightFacade();
});
it("should successfully add a dataset (first)", function() {
const result = facade.addDataset("ubc", sections, InsightDatasetKind.Sections);
return expect(result).to.eventually.have.members(["ubc"]);
});
it("should successfully add a dataset (second)", function() {
const result = facade.addDataset("ubc", sections, InsightDatasetKind.Sections);
return expect(result).to.eventually.have.members(["ubc"]);
});
});
});
Notice - we've got the same test duplicated - will they both pass? They would if there is no state carried between the tests. But, in Section 5.5, we state:
When a user creates a new instance of InsightFacade, they should be able to query the datasets existing on disk...
After the first test, the dataset "ubc" will be written to disk. When we create another instance of InsightFacade, it will see the "ubc" dataset on disk. Then, when the second test adds the dataset "ubc", it will be rejected since it can see that the dataset "ubc" has already been added. The first test will pass and the second test will fail.
We need to remove the dataset(s) from disk between tests, so each test starts with a fresh state. You may have noticed that /test/resources/archives/TestUtil.ts had an additional helper method, clearDisk(). Just like we need to reinstantiate facade between tests, we should also be clearing the disk. A call to clearDisk() should be added BEFORE creating a new InsightFacade instance. The beforeEach should look like:
beforeEach(function() {
clearDisk();
facade = new InsightFacade();
});
To ensure that the correct members of the API are defined and exported correctly, we will add a file to your repo which imports the API before commencing the build. This file is as follows:
// Run from `src/Main.ts`
import * as fs from "fs-extra";
import * as zip from "jszip";
import {
IInsightFacade,
InsightDatasetKind,
NotFoundError,
ResultTooLargeError,
InsightError,
InsightDataset,
InsightResult,
} from "./controller/IInsightFacade";
import InsightFacade from "./controller/InsightFacade";
const insightFacade: IInsightFacade = new InsightFacade();
const futureRows: Promise<InsightResult[]> = insightFacade.performQuery({});
const futureRemovedId: Promise<string> = insightFacade.removeDataset("foo");
const futureInsightDatasets: Promise<InsightDataset[]> = insightFacade.listDatasets();
const futureAddedIds: Promise<string[]> = insightFacade.addDataset("bar", "baz", InsightDatasetKind.Sections);
futureInsightDatasets.then((insightDatasets) => {
const {id, numRows, kind} = insightDatasets[0];
});
const errors: Error[] = [
new ResultTooLargeError("foo"),
new NotFoundError("bar"),
new InsightError("baz"),
];
After Checkpoint 0, we will be introducing ESLint. To ensure your test files comply with the linter, you may need to refactor the source files you write during the Checkpoint 0 development period.
The best way to get ahead and reduce the refactoring you may need to do later is to write clean and readable code. This is what a linter is meant to assist with.
Getting ESLint running will require installing the eslint package, a few packages to make it compatible with TypeScript, and defining an .eslintrc.js.
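A minimal .eslintrc.js for a TypeScript project might look like the following. The package names and rule sets here are illustrative assumptions only; the course will provide its own configuration, so check the typescript-eslint documentation rather than copying this verbatim:

```javascript
// Hypothetical minimal ESLint config for a TypeScript project (illustration only).
module.exports = {
	parser: "@typescript-eslint/parser",
	plugins: ["@typescript-eslint"],
	extends: [
		"eslint:recommended",
		"plugin:@typescript-eslint/recommended",
	],
	env: { node: true, mocha: true },
};
```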