Last updated: 06.12.2021 Editor: David Jenkins
The system is currently designed such that a new instance of the challenge system needs to be deployed for every new challenge/iteration that is run in each country. Certain measures were implemented to streamline this setup process (for more details, see the Jan - Mar 2021 sprint report in the Sprint Cycle section). Nevertheless, several sets of tasks still need to be completed before a new iteration can go live - these tasks can be grouped into five functional areas across the Wavumbuzi team:
Technical setup of the challenge system
Configuration of the challenge system
Configuration of the CRM and other supporting systems
Take-to-market preparation
M&E preparation
The sections below outline the tasks that need to be completed within each of these functional areas before a new iteration can go live.
Central to the deployment of a new instance of the challenge system are the technical configurations required to create unique staging and production environments for the new instance. This setup includes a variety of tasks, from configuring new instances of the learner and teacher portals within the AWS infrastructure to updating system white labels across deployments in different countries. These tasks require technical knowledge and can therefore only be completed by a member of the development team.
A detailed checklist of the steps involved in the technical setup of the challenge system can be found in the link provided. This checklist is stored in Confluence and it is the responsibility of the lead developer for the project (currently Ruti Yannick) to update it on a regular basis.
It is suggested that the upcoming sprint cycle (Jan - April 2022) should focus on amending the underlying logic that requires a new instance of the challenge system to be deployed for each new iteration of the challenge. Instead, it is recommended that the challenge system be configured to run as a single instance that houses multiple competitions (each representing a new iteration of the challenge). This new configuration would offer multiple benefits, from single sign-on for users to improved efficiency and scalability. The details of what is required to achieve this in the next sprint cycle can be found in this scoping document.
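To illustrate the proposed shift from one deployment per iteration to a single multi-competition instance, a minimal sketch of the data model is given below. This is purely illustrative: the entity names, fields, and methods are assumptions for the sake of the example, not the actual schema of the challenge system.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class Competition:
    """One iteration of the challenge in one country (hypothetical model)."""
    country: str
    name: str
    starts: date
    ends: date


@dataclass
class ChallengeSystem:
    """A single deployed instance hosting many competitions,
    instead of one deployment per challenge iteration."""
    competitions: list[Competition] = field(default_factory=list)

    def add_competition(self, competition: Competition) -> None:
        # New iterations become rows in one system, not new deployments.
        self.competitions.append(competition)

    def active(self, today: date) -> list[Competition]:
        # All competitions currently live across countries.
        return [c for c in self.competitions if c.starts <= today <= c.ends]


# One instance can serve several country iterations at once; users would
# then sign on once and see every competition they participate in.
system = ChallengeSystem()
system.add_competition(
    Competition("South Africa", "2022 Challenge", date(2022, 2, 1), date(2022, 5, 31))
)
system.add_competition(
    Competition("Kenya", "2022 Challenge", date(2022, 3, 1), date(2022, 6, 30))
)
print(len(system.active(date(2022, 3, 15))))  # prints 2: both iterations are live
```

Under this model, launching a new iteration becomes a data/configuration task rather than a fresh technical deployment, which is where the scalability gains described above would come from.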
Once the technical setup for a new instance of the challenge system is complete, several configurations need to be made to the system for the specific context of that iteration (based on special requirements, lessons learned from previous iterations, etc.). These configurations are varied in nature, from providing updated white label documents to configuring parameters for the peer review algorithm to uploading challenge content. These tasks require a good knowledge of the super-admin portal and form a key part of testing; they are therefore typically configured by the Product Owner and reviewed by a member of the QA team.
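As an illustration of the kind of per-iteration parameters involved, a peer review configuration might look like the sketch below. The parameter names and values here are hypothetical - they do not reflect the actual fields in the super-admin portal - but they show why a review step by the QA team is useful before such settings go live.

```python
# Hypothetical peer review parameters for one iteration
# (illustrative names and values only).
peer_review_config = {
    "reviews_per_submission": 3,  # how many peers score each entry
    "reviews_per_learner": 5,     # how many entries each learner must score
    "min_score": 1,
    "max_score": 10,
}


def validate(config: dict) -> bool:
    """Basic sanity checks before a configuration is accepted."""
    return (
        config["reviews_per_submission"] >= 1
        and config["reviews_per_learner"] >= config["reviews_per_submission"]
        and config["min_score"] < config["max_score"]
    )


print(validate(peer_review_config))  # prints True
```

A simple validation pass like this catches configuration mistakes (e.g. an inverted score range) during testing rather than after learners start submitting.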
A detailed checklist of the steps involved in the configuration of the challenge system can be found in the link provided. This checklist is stored in Confluence and it is the responsibility of the lead developer for the project (currently Ruti Yannick) to update it on a regular basis.
Instructions relating to several of the configurations listed in the "configuration of the challenge system" checklist can be found in the Product section of this playbook.
For more information on the setup checklist tasks relating to the CRM and other supporting systems, refer to the New Instance Setup page in the CRM & Systems section.
For more information on the setup checklist tasks relating to take-to-market preparations, refer to the Pre-Campaign Planning page in the Marketing & Communications section.
M&E preparation covers three areas:
Academic
Operational (BI)
Growth Hacking