Last updated: 06.12.2021 Editor: David Jenkins
As the system is currently designed, every new instance of the challenge system requires several sets of tasks to be completed either shortly before or immediately after the associated challenge/iteration comes to an end. These tasks can be grouped into the following functional areas: challenge end date configuration, certificates, leaderboards, gameworld overlay graphics, finalisation of submission scores, exploratory data analysis, and data export.
When: Before challenge ends
Who: Dev Team
To ensure that challenge submissions close (i.e. the challenge ends) at the correct time on the correct day, the development team must configure the correct date and time in the code for each iteration.
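As an illustration, the cut-off might be stored as a single timezone-aware constant checked by the submission flow. This is a minimal sketch only; the constant name, guard function, and example date below are assumptions, not the actual codebase identifiers.

```typescript
// Hypothetical sketch: the real variable names and file location differ
// per codebase. Storing the cut-off as an explicit UTC instant keeps the
// close time unambiguous regardless of the server or user timezone.
const CHALLENGE_END_UTC = new Date("2021-12-06T16:00:00Z"); // example date only

// Guard used by the submission flow: reject anything after the cut-off.
function isSubmissionOpen(now: Date = new Date()): boolean {
  return now.getTime() < CHALLENGE_END_UTC.getTime();
}
```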
When: When final moderations are complete
Who: System Admin
If digital certificates are to be issued to learners and teachers who participated in a specific iteration, these need to be configured in the super-admin portal (Modules > App Settings > Group Name “Certificate”). The current configuration criteria and instructions for certificate design are documented on the Certificates page in Confluence.
The certificate design is flexible, in that multiple elements present within the system (e.g. learner name, affiliated school, badges earned, total points, skills earned, etc.) can be configured to display on top of the certificate artwork. The current certificate is configured to display only the "learner name" as an overlay. Amending this configuration requires developer intervention at code level.
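For illustration, the overlay configuration might take a shape like the sketch below. The field names, coordinates, and structure are assumptions; only the portal path (Modules > App Settings > Group Name “Certificate”) comes from this page.

```typescript
// Hypothetical shape of the certificate overlay configuration; the actual
// portal keys differ. Each entry maps a system element to a position on
// the certificate artwork.
interface OverlayField {
  source: "learnerName" | "school" | "badges" | "totalPoints" | "skills";
  x: number;        // horizontal position on the artwork, in px
  y: number;        // vertical position on the artwork, in px
  fontSize: number; // rendered text size, in px
}

// Current behaviour per this page: only the learner name is overlaid.
const certificateOverlay: OverlayField[] = [
  { source: "learnerName", x: 540, y: 620, fontSize: 36 },
];
```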
When printing certificates from the system, teachers/learners must print the certificate directly as a PDF (rather than downloading it as an image and printing that), as this is the only way the overlay elements will appear on the printed certificate.
When: Typically one week prior to the challenge ending
Who: System Admin
To build suspense at the end of the challenge period, to allow manual adjustments from system moderations to be accurately reflected, and to keep the top prize winners anonymous until the awards event, the leaderboards are typically hidden during the final week of the challenge. This can be configured in the super-admin portal via Modules > App Settings > Group Name “Leaderboard”.
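A minimal sketch of what that setting might represent is shown below; only the portal path comes from this page, and the key names and value format are assumptions.

```typescript
// Hypothetical representation of the Leaderboard app setting. Toggling
// the flag hides the public leaderboards; key names are illustrative.
const leaderboardSettings = {
  group: "Leaderboard",
  hidden: true,                        // hide during the final week
  hiddenFrom: "2021-11-29T00:00:00Z",  // roughly one week before the end date
};
```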
When: Before the challenge ends
Who: System Admin (source graphics) and Dev Team (deploy graphics)
When the challenge end date is reached, the challenge gameworld is automatically covered by a set of overlay graphics designed to prevent further challenge submissions (which are made via the gameworld) and to direct users toward tasks that can earn them bonus points (e.g. completing endline surveys, discretionary reviews, etc.). These overlay graphics need to be provided to the dev team to deploy before the challenge ends.
These should be provided in PNG format for mobile (1080 x 1920px) and desktop (1920 x 1080px).
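As a sketch of how the gameworld might select the correct asset once the end date passes (the asset paths, breakpoint, and function name below are assumptions):

```typescript
// Hypothetical selection of the end-of-challenge overlay. Paths and the
// 768px breakpoint are illustrative; the PNG dimensions match the specs
// above (1080 x 1920px mobile, 1920 x 1080px desktop).
const OVERLAYS = {
  mobile: "/assets/overlays/challenge-end-1080x1920.png",
  desktop: "/assets/overlays/challenge-end-1920x1080.png",
};

function overlayFor(now: Date, challengeEnd: Date, viewportWidth: number): string | null {
  if (now.getTime() < challengeEnd.getTime()) return null; // challenge still open
  return viewportWidth < 768 ? OVERLAYS.mobile : OVERLAYS.desktop;
}
```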
When: After the challenge ends
Who: System Admin
At the end of each iteration there are usually several submissions sitting in the queues listed below that require a finalised score before the challenge data can be considered complete. It is the responsibility of the system admin to ensure that all submissions in these queues receive a finalised score:
Reported submissions queue (accessible via super-admin portal > modules > reported submissions)
Remark requests queue (accessible via super-admin portal > modules > remark requests)
Moderation queue (accessible via super-admin portal > moderations tab > moderate a challenge)
To assist with this task, the system admin typically enlists several other stakeholders (e.g. ambassadors) for support, depending on the number of submissions present in the above-mentioned queues. Learners and teachers can also continue contributing to the finalisation of scores after the challenge has ended.
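As an illustration of the completeness condition, the sketch below counts submissions still awaiting a finalised score per queue; the Submission shape and queue names are assumptions, since the real queues live in the super-admin portal.

```typescript
// Hypothetical completeness check: the challenge data is complete once
// every queue's outstanding count reaches zero. Types are illustrative.
type Queue = "reported" | "remarkRequest" | "moderation";

interface Submission {
  id: string;
  queue: Queue;
  scoreFinalised: boolean;
}

function outstandingByQueue(submissions: Submission[]): Record<Queue, number> {
  const counts: Record<Queue, number> = { reported: 0, remarkRequest: 0, moderation: 0 };
  for (const s of submissions) {
    if (!s.scoreFinalised) counts[s.queue] += 1;
  }
  return counts;
}
```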
When: After all unfinalised submissions have been cleared
Who: System Admin
To have greater confidence in the integrity of the challenge data and submission scores, it is recommended that an exploratory analysis be run on the final data set to identify any patterns that might indicate undesirable behaviour.
Any cases of undesirable behaviour identified in this analysis should be investigated, and manual adjustments should be made in the system (by the dev team) to reflect any points reductions, suspensions, etc.
The final moderation process has been run at a micro-level by the QA team in previous iterations. A limitation of this approach is that it can undermine the scoring process built into the challenge system. It is recommended that in future the M&E team run exploratory analyses at a meta-level; the approach to this analysis would need to be scoped by that team.
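As one example of what a meta-level check could look like (the heuristic, threshold, and field names below are assumptions; the actual checks would be scoped by the M&E team):

```typescript
// Hypothetical heuristic: flag learners whose daily submission volume is
// an extreme outlier, which can indicate attempts to game the points
// system. The threshold is illustrative and should be tuned on real data.
interface ScoredSubmission {
  learnerId: string;
  submittedAt: Date;
  score: number;
}

function flagHighVolumeLearners(
  submissions: ScoredSubmission[],
  maxPerDay = 50, // assumed threshold
): string[] {
  const perLearnerDay = new Map<string, number>();
  for (const s of submissions) {
    const key = `${s.learnerId}|${s.submittedAt.toISOString().slice(0, 10)}`;
    perLearnerDay.set(key, (perLearnerDay.get(key) ?? 0) + 1);
  }
  const flagged = new Set<string>();
  perLearnerDay.forEach((count, key) => {
    if (count > maxPerDay) flagged.add(key.split("|")[0]);
  });
  return Array.from(flagged);
}
```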
When: When final moderations are complete
Who: Dev Team
To ensure the challenge data for a specific iteration is backed up, it is advised that the dev team export all data relating to the associated instance of the challenge platform and store it in a secure repository. This data can then be used to restore corrupted records, serve as source data for longitudinal research, etc.
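A minimal export sketch using Node's built-in fs module is shown below; the collection names and folder layout are assumptions, and the actual export mechanism depends on the platform's database.

```typescript
// Hypothetical export: serialise each collection to JSON in a dated
// folder, which can then be pushed to a secure repository.
import { mkdirSync, writeFileSync } from "fs";
import { join } from "path";

function exportIterationData(
  iteration: string,
  collections: Record<string, unknown[]>, // e.g. { submissions: [...], scores: [...] }
): void {
  const outDir = join("exports", `${iteration}-${new Date().toISOString().slice(0, 10)}`);
  mkdirSync(outDir, { recursive: true });
  for (const [name, rows] of Object.entries(collections)) {
    writeFileSync(join(outDir, `${name}.json`), JSON.stringify(rows, null, 2));
  }
}
```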