Ongoing Audits
Identifying Incentive/Prize Winners
Weekly Moderation
Reported Submissions (Keep Log of Offenders > Protocol in T&Cs)
Remark Requests
Clearing the Moderation Queue
Audits/Data Integrity (Recommendations > Red Flag/Patterned Behaviour Scoped)
Sections 1-3 below are things we've done in the past. Section 4 covers things we've discussed and that I feel would be good practice, but which have never materialised.
1. Recurring Data Validations
Learners without schools
Manage school join/change requests
Manage school suggestions (learner and teacher)
Identify schools with learners but no teachers
Identify duplicate accounts (delete/merge; see the sketch after this list)
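As a starting point, here's a minimal sketch of what these recurring checks could look like. The record shapes, field names, and sample data are all hypothetical; the live platform schema will differ.

```python
from collections import defaultdict

# Hypothetical record shapes; the live platform schema will differ.
learners = [
    {"id": 1, "email": "amina@example.com", "school_id": None},
    {"id": 2, "email": "amina@example.com", "school_id": 10},
    {"id": 3, "email": "brian@example.com", "school_id": 11},
]
teachers = [{"id": 100, "school_id": 10}]

def learners_without_schools(learners):
    """Learner accounts with no school attached."""
    return [l for l in learners if l["school_id"] is None]

def schools_missing_teachers(learners, teachers):
    """Schools that have at least one learner but no teacher."""
    learner_schools = {l["school_id"] for l in learners if l["school_id"] is not None}
    teacher_schools = {t["school_id"] for t in teachers}
    return sorted(learner_schools - teacher_schools)

def duplicate_accounts(learners):
    """Accounts sharing an email address: candidates for merge/delete."""
    by_email = defaultdict(list)
    for l in learners:
        by_email[l["email"].strip().lower()].append(l["id"])
    return {email: ids for email, ids in by_email.items() if len(ids) > 1}

if __name__ == "__main__":
    print(learners_without_schools(learners))            # learner 1
    print(schools_missing_teachers(learners, teachers))  # [11]
    print(duplicate_accounts(learners))                  # {'amina@example.com': [1, 2]}
```

In practice these would run as scheduled queries against the platform database, with the output feeding a weekly review list rather than being actioned automatically.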
2. Monitor Peer Review Backlog
Monitor unfinalised submissions backlog (and tweak algorithm/intervene if issue)
Monitor flagged submissions backlog (and intervene if issue)
Monitor remark request backlog (and intervene if issue)
Monitor key algorithm indicators (and tweak if issue; see the sketch after this list)
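A sketch of a simple threshold-based backlog check. The queue names, size caps, and age limits here are illustrative; the real thresholds would need agreeing with the team and calibrating against normal traffic.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Illustrative thresholds; real values would be agreed with the team.
MAX_QUEUE_SIZE = {"unfinalised": 200, "flagged": 25, "remark": 50}
MAX_AGE_DAYS = {"unfinalised": 14, "flagged": 3, "remark": 7}

@dataclass
class QueueItem:
    submission_id: int
    created_at: datetime

def backlog_alerts(queues, now=None):
    """Return alerts for any queue breaching its size or age threshold."""
    now = now or datetime.now(timezone.utc)
    alerts = []
    for name, items in queues.items():
        if len(items) > MAX_QUEUE_SIZE[name]:
            alerts.append(f"{name}: {len(items)} items exceeds cap of {MAX_QUEUE_SIZE[name]}")
        if items:
            oldest = min(items, key=lambda i: i.created_at)
            age = now - oldest.created_at
            if age > timedelta(days=MAX_AGE_DAYS[name]):
                alerts.append(f"{name}: oldest item ({oldest.submission_id}) is {age.days} days old")
    return alerts

if __name__ == "__main__":
    now = datetime.now(timezone.utc)
    queues = {
        "unfinalised": [QueueItem(1, now - timedelta(days=20))],
        "flagged": [],
        "remark": [],
    }
    for alert in backlog_alerts(queues, now=now):
        print(alert)  # unfinalised: oldest item (1) is 20 days old
```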
3. Managing Suspensions/Permissions
Suspending/reinstating permissions for users who demonstrate bad behaviour (see the sketch below)
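Whatever form this takes, every suspension or reinstatement should be logged with who acted and why, so decisions are auditable later. A minimal sketch, assuming a simple permissions store and an append-only audit log (all names hypothetical):

```python
from datetime import datetime, timezone

# Hypothetical in-memory stores; the live platform would persist both.
permissions = {}  # user_id -> {"can_review": bool, "can_submit": bool}
audit_log = []    # append-only record of every permission change

def set_suspension(user_id, suspended, reason, actor):
    """Suspend or reinstate a user's review/submit permissions, recording who acted and why."""
    permissions[user_id] = {"can_review": not suspended, "can_submit": not suspended}
    audit_log.append({
        "user_id": user_id,
        "suspended": suspended,
        "reason": reason,
        "actor": actor,
        "at": datetime.now(timezone.utc).isoformat(),
    })

# Example: suspend for repeated flagged submissions, then reinstate after an appeal.
set_suspension(42, True, "3 flagged submissions in 7 days", actor="moderator_a")
set_suspension(42, False, "appeal upheld", actor="moderator_b")
```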
4. Bad Behaviour Trend Moderations (new processes)
Credibility score
Number of reviews
Reviews in a short space of time (or regularly hitting thresholds)
Multiple reviews with minimal review time
Recurring flagged submissions (learner)
Lost points (and how)
Recurring remark requests (marker)
Same score awarded across a continuous sequence of reviews
Significant difference between moderated and finalised scores
Very low standard deviation across the reviews linked to a single submission
Total points vs reasonably expected/possible points (see the sketch after this list)
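None of these signals exist as automated checks yet. As a starting point, here's a sketch of how a few of them could be computed; the window sizes, run lengths, and thresholds are illustrative and would need calibrating against real data before flagging anyone.

```python
import statistics
from datetime import datetime, timedelta, timezone

def rapid_review_burst(review_times, window=timedelta(minutes=10), limit=5):
    """Flag a reviewer completing more than `limit` reviews inside any `window`."""
    times = sorted(review_times)
    for i in range(len(times)):
        count = sum(1 for t in times[i:] if t - times[i] <= window)
        if count > limit:
            return True
    return False

def identical_score_run(scores, run_length=5):
    """Flag a reviewer awarding the same score `run_length` times in a row."""
    run = 1
    for prev, cur in zip(scores, scores[1:]):
        run = run + 1 if cur == prev else 1
        if run >= run_length:
            return True
    return False

def low_review_spread(scores, min_reviews=3, threshold=0.5):
    """Flag a submission whose reviews agree suspiciously closely."""
    return len(scores) >= min_reviews and statistics.stdev(scores) < threshold

def moderation_gap(moderated_score, finalised_score, threshold=10):
    """Flag a large gap between the moderated score and the peer-finalised score."""
    return abs(moderated_score - finalised_score) >= threshold

if __name__ == "__main__":
    base = datetime.now(timezone.utc)
    burst = [base + timedelta(seconds=30 * i) for i in range(8)]  # 8 reviews in ~4 minutes
    print(rapid_review_burst(burst))                # True
    print(identical_score_run([7, 7, 7, 7, 7, 3]))  # True
    print(low_review_spread([8.0, 8.1, 8.0]))       # True
    print(moderation_gap(55, 80))                   # True
```

These are red-flag heuristics, not verdicts: anything they surface should go to a moderator for review rather than trigger an automatic suspension.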
Useful Link
https://sites.google.com/jasiri.org/wavumbuziplaybook/home/product/user-management