If you copy any portion of the project (even part of a single checkpoint), you will receive a 0 on the entire project, or your course grade will be capped at 50%. See the No Copying Policy page for details and licensing.
The project is broken into 4 deadlines corresponding to 4 checkpoints: Checkpoint 0 (C0), Checkpoint 1 (C1), Checkpoint 2 (C2), and Checkpoint 3 (C3).
Each checkpoint is worth an equal amount (25% of the project grade). The project grade is calculated as follows:
Project Grade =
( (C0 grade + C1 grade + C2 grade + C3 grade) / 4 )
- (Scrum Attendance Penalty)
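For example, with hypothetical checkpoint grades of 75, 100, 75, and 100, and one missed Scrum meeting (a 10% deduction, described below), the calculation would be:
((75 + 100 + 75 + 100) / 4) - 10 = 87.5 - 10 = 77.5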
Scrum meetings are mandatory during C1-C3.
Each week during C1-C3, you and your teammate will have a Scrum meeting with your assigned mentor TA. This is a 15-minute check-in where you state what you have done, ask any questions, and express any concerns about progress.
As weekly Scrum meetings are mandatory to help keep your team on track, each missed meeting will result in a 10% deduction from your final project grade. We will not penalize absences due to extenuating circumstances (illness, doctor's appointments, etc.) if you create a GitHub Issue on your team's repository and post it before Scrum. The issue needs to answer the following questions: (1) What work did you do last week? (2) What work will you do in the following week? (3) Are you encountering any issues or concerns you would like to bring up?
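For reference, a minimal issue along these lines (all details here are hypothetical) might look like:
(1) Last week: finished the dataset parser and added unit tests for it.
(2) Next week: start on query validation and improve error handling.
(3) Concerns: we are unsure how to interpret one part of the spec and will post on Piazza.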
Each checkpoint will be assessed according to the Bucket Grading scheme described below.
You can ask for help on Piazza, in office hours, or in your weekly lab meetings with your mentor TA.
If you are not contributing equitably to the technical components of the project, we may adjust your grade downward to reflect your lower contribution. This is determined on a case-by-case basis by the instructional staff, usually after the project is complete.
In software engineering, we care about creating holistic software. This means that truly top-notch software is more than just functional code: it also includes documentation, processes, and collaboration artifacts. This is why aspects of the project are evaluated using AutoTest for the code artifact and Project Journals (described on the learning areas page) for the reflective aspects of the project.
While you are developing your code, you are expected to be building documentation and/or collaboration artifacts along the way, some of which you will turn in as part of your final submission. Your group will receive feedback on the general quality of your code via AutoTest during the checkpoint sprint; your non-code artifacts will be submitted individually by the deadline and assessed in the lab immediately following the deadline via a discussion with your TA.
Your code artifact quality will fall into one of the following buckets:
Beginning - Artifacts have just been started or are otherwise incomplete.
Acquiring - Artifacts demonstrate support for basic functionality, and have some documentation or have been built somewhat collaboratively.
Developing - Artifacts demonstrate support for basic and complex functionality, and have solid documentation and have been built collaboratively.
Proficient - Artifacts demonstrate strong support for basic and complex functionality, and are robust to edge cases. They have solid documentation and have been built collaboratively.
There are a few reasons why the project grading scheme is different from grading schemes you may be used to:
Don’t focus on individual test cases: A typical autograder is punitive, meaning a single failed test case lowers your grade. This runs counter to our belief that software quality should be assessed as a whole. Proficiency over perfection.
Leniency for autograded components: Automated tests are inherently strict and rigid in their format. In contrast, the spec may leave some room for interpretation, and each group may come up with a very different solution to the same problem. To account for this dissonance, we strive to give leniency wherever reasonable. Our thresholds reflect this value.
Better autograded feedback: We now give additional, more detailed feedback than in the past. This is meant to help you reach proficiency faster and give you a clear stopping point so you can allocate your time elsewhere.
Of course, we must report your final grade numerically. For each checkpoint your autograded component will be assigned a bucket based on its quality. Achieving a bucket for a component means receiving an associated percentage score for that component.
Code Artifact Buckets:
Beginning - 0%
Acquiring - 55%
Developing - 75%
Proficient - 100%
Even so, we encourage you to think about your submission holistically, in terms of the qualitative bucket names and definitions rather than percentage grades.
Here are some example grade calculations:
You start 2 days before the deadline and can’t get a working solution in (passing just 34% of the private test cases). To the right is an example of your AutoTest feedback when running @310-bot #c1.
Your grade for this checkpoint is 0%. As a whole, your submission can be thought of as in the Beginning bucket, since the code artifacts are “just started or otherwise incomplete”.
In Classy, this will be shown as:
c1 = 0.0
This time, you started early and have a mostly-working solution at the deadline. However, you can’t quite get past one really nasty bug, which causes you to fail several roomQueries tests. To the right is an example of your AutoTest feedback when running @310-bot #c2:
As a whole, your submission can be thought of as in the Developing bucket, since your code artifacts support most functionality, apart from that nasty bug, which prevents it from being Proficient!
In Classy, this will be shown as:
c2 = 75.0
This time, you and your partner have mastered your collaboration and have a quality solution that passes a large number (but not all) of the private tests. To the right is an example of your AutoTest feedback when running @310-bot #c3:
In Classy, this will be shown as:
c3 = 100.0
*You may notice that you received full marks on the checkpoint despite not passing all of the test cases; this is intentional. We care about holistic software quality instead of incremental gains per test case, and a Proficient solution by our definition does NOT need to pass 100% of the private tests (as it might in other grading schemes).