Challenge – A single challenge that forms part of the competition.
Competition – The entire 42-challenge competition.
Student – A single student participating in the competition.
Teacher – A teacher registered in the competition. Teachers can also review challenges.
Submission – A submission made by a single student for a specific challenge.
Review – A review done by a single reviewer on a single submission. A submission usually has multiple reviews.
Peer Review – A review completed by a student.
Teacher Review – A review completed by a teacher.
Reviewer – A student other than the student who made the submission, or a teacher.
Credibility – A score describing the credibility of a reviewer (0-1 for students, 0-2 for teachers).
Moderation – A moderation performed by a Moderator or Super-Admin (System Administrator). The moderated score is the final score.
Peer review points – Competition points that students earn by submitting fair peer reviews. These points count towards their total competition score.
Teacher review points – Competition points that teachers earn by submitting fair reviews. These points count towards their school total.
Figure 1 below gives an overview of the main algorithm, starting from the moment a new review is submitted. Note that reviews can be received both on challenges that are already finalised and on challenges that are not yet finalised.
Reviews
When a student completes a submission, the submission enters the ‘peer review queue’.
Students, Teachers and Moderators can all review submissions, but only students and teachers are awarded review points for doing so.
Students, Teachers and Moderators all have credibility ratings assigned to them.
Moderators can also conduct reviews, but their credibility is fixed, whereas that of students and teachers is variable.
The credibility rating scale spans 0-1 for students and 0-2 for teachers.
The starting credibility rating for teachers can be set higher than that of students.
The starting credibility rating for teachers can be set higher than that of students.
When a teacher or moderator reviews a submission, the process is identical to when a student does the review, except that moderators can’t earn peer review points.
When Students, Teachers and Moderators complete a peer review, this is logged as a ‘review score’ in the system. When the system receives enough fair review scores, it will assign a ‘system score’ which will become the finalised score for that submission.
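The role-dependent credibility ranges above can be sketched as follows. This is a minimal sketch: the names, and the fixed moderator value of 2.0, are illustrative assumptions, not values taken from the real system.

```python
from enum import Enum

class Role(Enum):
    STUDENT = "student"      # credibility in [0, 1]
    TEACHER = "teacher"      # credibility in [0, 2]
    MODERATOR = "moderator"  # fixed credibility

# Assumed fixed credibility for moderators (the document only says it is fixed).
MODERATOR_CREDIBILITY = 2.0

def clamp_credibility(role: Role, value: float) -> float:
    """Keep a credibility rating inside the range allowed for the role."""
    if role is Role.MODERATOR:
        return MODERATOR_CREDIBILITY  # moderators' credibility never changes
    upper = 1.0 if role is Role.STUDENT else 2.0
    return max(0.0, min(upper, value))
```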
Reported Submissions
When Students, Teachers and Moderators review a submission, they have the option to report a submission as inappropriate, which will send the submission to the ‘reported submissions queue’.
Reported submissions can be viewed by Moderators, Super-Admins and Teachers who are affiliated with the student who submitted the associated submission.
When Moderators, Super-Admins or affiliated Teachers view a reported submission, they can either dismiss the report, which sends the submission back to the ‘peer review queue’, or confirm that the submission is inappropriate, which automatically assigns it an ‘inappropriate score’ of zero in the system.
If any ‘peer review scores’ are already assigned to this submission when it is assigned an ‘inappropriate score’, these ‘peer review scores’ are dismissed, and any associated reviewers’ credibility ratings are unaffected.
Moderations
When a submission cannot be finalised by the system, either in a certain amount of time or within a certain number of reviews (whichever comes first), the submission enters the ‘moderation queue’.
By default, only Moderators and Super-Admins can moderate submissions - not teachers or students.
On occasion, the most trusted teachers can be given moderation capabilities. When this is the case, they will only be able to moderate submissions from students outside their school.
When a Moderator or Super-Admin moderates a submission (different to a normal review), the mark they assign is logged in the system as a ‘moderator score’, which becomes the finalised score for that submission.
All reviewers that reviewed the associated submission will have their credibility ratings updated according to this ‘moderator score’.
Remark Requests
If a student is unhappy with the ‘system score’ they received as a result of the peer review system, they have the option to request a remark, and the submission enters the ‘remark request queue’.
By default, only Moderators and Super-Admins can remark submissions - not teachers or students.
On occasion, the most trusted teachers can be given remarking capabilities. When this is the case, they will only be able to remark submissions from students outside their school.
When a Moderator or Super-Admin remarks a submission (different to a normal review or moderation), they can either dismiss the request and retain the ‘system score’, or assign a new score, which is logged in the system as a ‘remark score’ and becomes the new finalised score for that submission.
All reviewers that reviewed the associated submission will have their credibility ratings updated according to this ‘remark score’.
Students cannot request a remark if the submission has already been assigned a ‘remark score’, ‘moderator score’ or ‘admin score’, or has been marked as an inappropriate submission.
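The eligibility rule above can be sketched as a simple check over the submission's score history. The score-kind labels mirror the quoted names in this document but the representation itself is an assumption.

```python
# Score kinds that block a remark request once logged against a submission.
BLOCKING_SCORE_KINDS = {"remark", "moderator", "admin", "inappropriate"}

def can_request_remark(score_history: list[str]) -> bool:
    """A remark can be requested only while the finalised score is a plain
    'system' score, i.e. no higher-ranking score has been logged."""
    return not BLOCKING_SCORE_KINDS.intersection(score_history)
```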
Appeals
If a reviewer is unhappy with the perceived ‘fairness’ of their ‘peer review score’ (and resulting change in their credibility weighting) when a ‘system score’ is calculated by the system for a submission they have marked, they have the option to ‘appeal’ this mark, and the submission enters the ‘appeal queue’.
By default, only Moderators and Super-Admins can view ‘appeals’ - not teachers or students.
On occasion, the most trusted teachers can be given appeal capabilities. When this is the case, they will only be able to view appeals requested by those outside their school.
When a Moderator or Super-Admin handles an appeal request (different to a normal review or moderation), they can do one of three things: dismiss the request and retain the ‘system score’; assign a new score, which is logged in the system as an ‘appeal score’ and becomes the new finalised score for that submission; or reclassify the appealing reviewer’s mark as a fair review, which triggers a re-calculation of that reviewer’s credibility rating.
All reviewers that reviewed the associated submission will have their credibility ratings updated according to this ‘appeal score’.
Reviewers cannot appeal if the associated submission has already been assigned a ‘remark score’, ‘moderator score’ or ‘admin score’, has been marked as an inappropriate submission, or has already had an appeal logged against it.
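A sketch of the three appeal outcomes described above; the function and field names are hypothetical, and the real system's data model will differ.

```python
def resolve_appeal(outcome: str, submission: dict, appellant: dict) -> None:
    """Apply one of the three appeal outcomes (illustrative labels)."""
    if outcome == "dismiss":
        pass  # the 'system score' stands; nothing changes
    elif outcome == "rescore":
        # A new 'appeal score' becomes the finalised score; all reviewers
        # of this submission then have their credibility updated against it.
        submission["final_score_kind"] = "appeal"
    elif outcome == "reclassify_fair":
        # The appealing reviewer's mark is re-marked as a fair review,
        # triggering a recalculation of only that reviewer's credibility.
        appellant["needs_credibility_recalc"] = True
    else:
        raise ValueError(f"unknown appeal outcome: {outcome}")
```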
Super-Admin Moderations
In addition to clearing moderations, remark requests, reported submissions and appeals, a Super-Admin can also access any submission in the system and assign an ‘admin score’, which will overrule (not overwrite) any ‘system score’, ‘inappropriate score’, ‘remark score’, ‘appeal score’ or ‘moderator score’ already assigned to that submission and become the finalised score for that submission.
All reviewers that reviewed the associated submission will have their credibility ratings updated according to this ‘admin score’.
When ‘system scores’, ‘remark scores’, ‘appeal scores’, ‘inappropriate scores’ or ‘moderator scores’ are overruled, they should not be overwritten. Instead, a record should be kept of all scores logged in association with that submission, and the highest-ranking score in the workflow will always be the final score for that submission.
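A sketch of "overrule, don't overwrite": every logged score is retained, and the highest-ranking kind wins. The relative ranking of the middle tiers below is an assumption inferred from this document, not a confirmed ordering.

```python
# Assumed ranking: 'admin' overrules everything; the middle tiers all
# overrule a plain 'system' score; among equal ranks, the most recently
# logged score wins.
SCORE_RANK = {
    "system": 0,
    "inappropriate": 1,
    "moderator": 1,
    "remark": 1,
    "appeal": 1,
    "admin": 2,
}

def final_score(score_log: list[tuple[str, float]]) -> float:
    """Pick the final score from the full (kind, value) history without
    discarding any entry: highest rank wins, ties go to the latest."""
    best = max(
        enumerate(score_log),
        key=lambda entry: (SCORE_RANK[entry[1][0]], entry[0]),
    )
    return best[1][1]
```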
Figure 2 below shows the pseudo-code that describes updating a reviewer’s credibility and awarding review points. If the reviewer’s score is within a narrow band around the final score, the reviewer’s credibility is adjusted upwards. If the reviewer’s score is outside a wider band around the final score, the reviewer’s credibility is adjusted downwards.
Figure 2: Pseudo-code for updating a reviewer’s credibility and granting peer review points
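The banded credibility update can be sketched as follows; the band widths and adjustment step are assumed tuning parameters, not the real defaults.

```python
NARROW_BAND = 5.0   # assumed: within +/-5 of the final score counts as fair
WIDE_BAND = 15.0    # assumed: beyond +/-15 counts as unfair
CRED_STEP = 0.05    # assumed per-review credibility adjustment

def update_credibility(credibility: float, review_score: float,
                       final_score: float, upper_bound: float = 1.0) -> float:
    """Adjust a reviewer's credibility based on how close their review
    score landed to the finalised score, clamped to [0, upper_bound]
    (upper_bound would be 2.0 for teachers)."""
    diff = abs(review_score - final_score)
    if diff <= NARROW_BAND:
        credibility += CRED_STEP   # fair review: credibility goes up
    elif diff > WIDE_BAND:
        credibility -= CRED_STEP   # unfair review: credibility goes down
    # between the two bands: no change
    return max(0.0, min(upper_bound, credibility))
```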
Peer review points are awarded every time a student submits a fair review, but only for a limited number of reviews per submission the student has made. Completing many reviews after a single submission does not remove the ability to earn peer review points for reviews after successive submissions. These peer review points are awarded for both mandatory and discretionary reviews. The peerReviewTriesPerSub parameter is the number of attempts a student gets at earning peer review points for each submission they make themselves. This value could be set slightly higher than the number of mandatory reviews per submission in order to encourage discretionary reviews.
(peerReviewTriesPerSub=5)
A student submits their first challenge and can now earn peer review points on their next five reviews. Say they complete only four reviews, earning points for some of them. When they submit their next challenge, the one unused try carries over, so they can now earn points on their next six reviews. Say they complete those six reviews plus about twenty more voluntary reviews: they can only earn points for the six. After their third submission, they can earn points on the next five reviews. In short, unused chances to earn points on reviews carry over to the next phase (after the next submission); once all chances are used up, further reviews have no effect on peer review points until another challenge is submitted, which grants the opportunity to score peer review points on the next five reviews.
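The carry-over behaviour in this example can be sketched with a simple counter; the class name is illustrative, and PEER_REVIEW_TRIES_PER_SUB mirrors the peerReviewTriesPerSub parameter above.

```python
PEER_REVIEW_TRIES_PER_SUB = 5  # value used in the example above

class ReviewTries:
    """Tracks how many upcoming reviews can still earn peer review points."""

    def __init__(self):
        self.tries_left = 0

    def on_submission(self):
        # Each new submission grants a fresh batch of tries;
        # unused tries from earlier submissions carry over.
        self.tries_left += PEER_REVIEW_TRIES_PER_SUB

    def on_review(self) -> bool:
        """Return True if this review is still eligible for points."""
        if self.tries_left > 0:
            self.tries_left -= 1
            return True
        return False
```

Replaying the example: 5 tries after the first submission, 4 used, so the second submission brings the total to 6; the twenty-odd voluntary reviews beyond those simply return False.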
Teachers are able to score review points for themselves (counting towards their school total). As with students, it is important to discourage unfair reviews. This is done by limiting the number of reviews on which a teacher has the opportunity to score review points, implemented as a cap on reviews per week.
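A sketch of such a per-week cap, assuming ISO-week bucketing and a hypothetical cap value; the real limit and bucketing scheme are not specified here.

```python
import datetime

TEACHER_REVIEWS_PER_WEEK = 10  # assumed tuning parameter

def review_earns_points(review_dates: list[datetime.date],
                        new_review: datetime.date) -> bool:
    """True if the teacher has not yet hit this week's point-earning cap,
    counting prior reviews that fall in the same ISO (year, week)."""
    week = new_review.isocalendar()[:2]
    used = sum(1 for d in review_dates if d.isocalendar()[:2] == week)
    return used < TEACHER_REVIEWS_PER_WEEK
```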
Figure 3 shows the pseudo-code for determining whether a submission can be finalised. Note that a flat (non-weighted) average and flat standard deviation are used to update reviewer credibilities. This prevents the “self-fulfilling prophecy” pitfall, where high-credibility reviewers would otherwise pull the reference score towards their own marks and be rewarded for it. The final score is a weighted average of the reviewers’ scores, weighted by their credibilities. The submission standard deviation is stored in order to adjust future reviewer credibilities.
When a submission has enough reviews (five or more), some outliers can be ignored when determining the standard deviation, flat_average and final score. This is done to help minimise the number of submissions sent for moderation. Table 2 below describes how many reviews are necessary before a given number of outliers can be ignored.
Table 2:
Figure 3: Pseudo-code describing the process of finalising a submission.
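Under stated assumptions (both threshold values below are hypothetical tuning parameters), the finalisation check described above might look like:

```python
import statistics

STD_DEV_THRESHOLD_TO_FINALISE = 10.0  # assumed value of stdDevThresholdToFinalise
MIN_REVIEWS_TO_FINALISE = 3           # assumed minimum review count

def try_finalise(scores: list[float], credibilities: list[float]):
    """Return (final_score, flat_std_dev) if the submission can be
    finalised, else None (more reviews or moderation needed)."""
    if len(scores) < MIN_REVIEWS_TO_FINALISE:
        return None
    # Flat (non-weighted) spread, used later for credibility updates.
    flat_std = statistics.pstdev(scores)
    if flat_std > STD_DEV_THRESHOLD_TO_FINALISE:
        return None  # reviewers disagree too much to finalise
    weight_sum = sum(credibilities)
    if weight_sum == 0:
        return None  # no credible reviewers to weight by
    # Final score: credibility-weighted average of the review scores.
    final = sum(s * c for s, c in zip(scores, credibilities)) / weight_sum
    return final, flat_std
```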
Figure 4 shows the pseudo-code for the moderation process. The moderator score is used as both the flat_average and final_score. The submission standard deviation is set to be the constant stdDevThresholdToFinalise.
The moderation process is kicked off as described in Figure 1. It is also kicked off when the submission is not finalised within a specified time limit from the moment it was submitted (limit = maxTimeTillFinalise).
No review points are awarded for completing a moderation and the moderator’s credibility is not updated.
Figure 4: Pseudo-code for moderation of a submission
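A sketch of the moderation step as described, with an assumed value for stdDevThresholdToFinalise and an illustrative submission record:

```python
STD_DEV_THRESHOLD_TO_FINALISE = 10.0  # assumed default value

def moderate(submission: dict, moderator_score: float) -> dict:
    """Finalise a submission with a moderator score: the score serves as
    both the flat_average and the final_score, and the stored standard
    deviation is pinned to stdDevThresholdToFinalise."""
    submission["flat_average"] = moderator_score
    submission["final_score"] = moderator_score
    submission["std_dev"] = STD_DEV_THRESHOLD_TO_FINALISE
    submission["queue"] = "finalised"
    # No review points are granted for moderating, and the moderator's
    # own credibility is left unchanged (it is fixed anyway).
    return submission
```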
There are several tuning parameters (variables) built into the Peer Review algorithm which allow the super-admin to tweak the outputs of the algorithm based on observed user behaviour. For more information on the tuning parameters for the peer review algorithm and suggested default values for these parameters, visit the "Peer Review Configurations" page on Confluence.
Audit Results (Post Kenya 2021)
Recommendations & Resulting Upgrades (Ahead of Rwanda 2021, Ed 1)