In my undergraduate teaching I use occasional homework assignments, weekly tutorial tests and scheduled class tests to formally assess my students. Each of these is discussed below. Informally, I use pop quizzes at random intervals during lectures to assess whether students are up to date with their work.
The purpose of these assignments is for students to revise concepts that they have previously engaged with, and they are developed as the need for them arises. For example, students are given a mathematics homework assignment to revise matrix algebra before applying the results in a statistical context (↪ view). A further example is additional regression exercises to ensure students can carry out regression analysis using matrices (↪ view). I mark these assignments, which makes it possible to identify any problem areas that require re-explanation before continuing with new work.
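As a minimal sketch of the matrix computation these exercises build towards (the simulated data and variable names are my own illustration, not taken from an actual assignment), the least squares coefficients can be computed directly as (X'X)^(-1) X'y:

```r
# Minimal sketch: least squares via matrix algebra, beta-hat = (X'X)^(-1) X'y.
# Data simulated purely for illustration.
set.seed(1)
n <- 50
x <- runif(n)
y <- 2 + 3 * x + rnorm(n)

X <- cbind(1, x)                          # design matrix with intercept column
beta_hat <- solve(t(X) %*% X) %*% t(X) %*% y

beta_hat                                  # matrix-based estimates
coef(lm(y ~ x))                           # agree with R's built-in fit
```

Seeing that the two sets of estimates agree helps students connect the matrix algebra to the statistical output they already know.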
In the first few years of lecturing STA221 we observed a large difference between the students' continuous assessment marks and their final marks. We then decided not to assess the weekly out-of-class tutorial questions, but to use weekly tutorial tests instead. In 2016 I was appointed as module coordinator and I continued with this approach (↪ view). At the end of 2016 the differences between the students' continuous assessment marks and their final marks were much smaller, and the pass rate had improved considerably. The pass rates from 2013 to 2019 are given below:
2013: 54%
2014: 59%
2016: 76%
2017: 89%
2018: 82%
2019: 94%
The nature of the undergraduate modules I teach and have taught requires the use of 100% written tests to assess students' reasoning ability, writing ability and understanding of the questions, as well as of the taught statistical concepts. I attempt to use practical scenarios when setting part of the test questions. This exposes the students to real-life applications and gives them the chance to think about how they will apply their textbook knowledge to solve these problems (↪ view). The remaining test questions are devoted to assessing whether students understand the theory and mathematics behind the statistical concepts. When developing students' understanding of statistics it is necessary, at some point, to prove certain theoretical results. The typical way of assessing these proofs is to ask students to complete a proof in a test. I realised, however, that some students are able to memorise a proof without understanding how its result is reached. To address this problem I started using comprehension-style questions in 2014, where I give students extracts of the proofs and then ask them questions about the extracts. For an example of this style of question, ↪ view.
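As an illustration of the kind of short proof involved (my own example, not an extract from a test), consider the unbiasedness of the sample mean of independent observations with common mean $\mu$:

$$E(\bar{X}) = E\left(\frac{1}{n}\sum_{i=1}^{n} X_i\right) = \frac{1}{n}\sum_{i=1}^{n} E(X_i) = \frac{1}{n}\, n\mu = \mu$$

A comprehension-style question on such an extract might ask which property of expectation justifies the second equality, rather than asking students to reproduce the chain of equalities from memory.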
In 2013 the STA211 and STA221 courses consisted of four class tests, of which the best three marks counted towards the student's semester mark. This method was followed to compensate for the absence of a formal sick test. However, after careful inspection it was found that students completed the course with disproportionately high class test averages. The approach was changed in 2014 to three class tests followed by a sick test, for which students only qualify once they have provided legitimate proof of their absence from any of the class tests. All three tests count equally towards the student's class test average (↪ view). In 2017 the approach was changed again for STA221: each of the three class tests is immediately followed by a sick test covering the same scope. This way all students are assessed on the entire scope of the module (↪ view).
I also mark these tests myself and use them to gauge the level of my students and to identify areas they struggle with. Their biggest stumbling block is understanding what they read and what is being asked of them, a direct consequence of a pre-existing language barrier. The students receive the complete test memo and, after marking their papers, I decide which questions were problematic and discuss these with the students in class. Feedback is given one week after testing, and students sign for their tests on a class list containing all previous test marks, giving them a picture of their progress as a whole.
In general I follow the same structure of assignments and tests to assess my postgraduate modules, but the way these are set up depends on the module outcomes. Details follow below.
I tailor the structure, content and frequency of assignments to the particular student cohort and module schedule: for example, full-time students versus part-time, mostly employed students, and traditional versus block lectures.
In the case of BIA713, which was presented in a block structure at the time I lectured it, we had part-time, employed students working in companies, and the assignments were set in such a way that they all addressed the same business problem. The objective was to explain the process of data analysis for solving business problems to a student cohort with little to no analytics experience. In 2017 the business problem was addressed in two assignments (↪ view). The first assignment, made available after the first block of lectures, introduced the business problem, gave the students access to the data and assessed the content covered in that block. The second assignment continued with solving the business problem, now using the content covered in the second block of lectures. This "block of lectures followed by an assignment" structure was used specifically because of the way the module presentation was structured. The assignments were quite lengthy to ensure that students kept working during the time between lectures. The business problem was retained between assignments to keep students engaged, and the continuity helps part-time students focus on the new type of analysis rather than also having to understand a new scenario.
My approach to assignments for the Honours Statistical Science modules is somewhat different and depends on whether I am the theory and practical lecturer or just the practical demonstrator.
Students completing my Honours modules need to be confident problem solvers using statistical software. They need to understand how to match the taught topics to the problems those topics can solve, and they need to turn their output into actionable and relevant insights. These are the attributes that Statistics graduates are now expected to have when they transition from university to industry. They also require these skills should they want to further their studies in the Data Science Masters programme.
When I am the practical demonstrator, my assignments are designed to mimic the process a student would go through when they use a particular technique to solve a practical problem. Examples of such assignments are included below. The examples also demonstrate how I adapt my assignments annually depending on the programme structure (weekly lectures or block teaching) or according to what I learned the previous year through research and supervision.
For modules where I am both the theory and practical lecturer, the assignments need to develop theoretical understanding while still making sure the students expand on their software application and interpretation skills.
When I block teach, the assignments are shorter to fit into the time allocated per block and to avoid pulling focus from other modules presented in consecutive blocks. When following a classic weekly lecture structure, the assignments are designed to be worked on over two weeks, giving students the opportunity to digest new content and work systematically through the new concepts. This also gives me the opportunity to set up an assignment that takes the students through a mini project on the topic (↪ view).
Statistics graduates are now expected to develop a relevant technology stack. Our postgraduate students receive training in Python and SAS in other modules; I train them in RStudio. This training goes beyond coding in base R. Students now use R Markdown to compile their assignments (↪ view), giving them the opportunity to practise being more efficient programmers and giving me the chance to monitor their progress.
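As a minimal sketch of what such a submission looks like (the title, question and code chunk are illustrative, not taken from an actual assignment), an R Markdown document interleaves prose, code and output in a single reproducible file:

````
---
title: "Practical Assignment 1"
author: "Student surname, initials"
output: pdf_document
---

## Question 1

```{r fit-model}
# Code, numeric output and interpretation live in one document,
# so the analysis can be re-run and checked end to end.
fit <- lm(mpg ~ wt, data = mtcars)
summary(fit)$coefficients
```

The negative slope estimate indicates that heavier cars tend to
have lower fuel efficiency.
````

Because the document must knit without errors before it can be submitted, I can see at a glance whether a student's code actually runs, which is what makes their progress easy to monitor.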
To further enhance the learning experience, I am exploring the integration of GitHub Copilot for RStudio in 2026, as mentioned under Curriculum Development. This will mean that, going forward, practical assignments will require a new format to continue assessing students' personal learning and knowledge gain separately from the AI. One idea would be to generate the Copilot solution to a question, alter the solution, and then ask the students to identify and correct the errors.
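A hypothetical sketch of what such a question could look like (the planted errors are my own illustration, not actual Copilot output; the annotations would of course be removed before the question is handed out):

```r
# "Find and correct the errors in this solution" -- illustrative sketch.
set.seed(42)
x <- rnorm(100, mean = 5, sd = 2)

# Planted error 1: var() returns the variance, not the standard
# deviation; the correct call is sd(x).
std_dev <- var(x)

# Planted error 2: a 95% confidence interval requires qt(0.975, ...),
# not qt(0.95, ...).
ci <- mean(x) + c(-1, 1) * qt(0.95, df = 99) * std_dev / sqrt(100)
```

Identifying these mistakes requires exactly the conceptual understanding that copying a generated solution does not.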
In order to assess the students' software skills and their ability to extract relevant insight from output, while still sufficiently assessing theoretical understanding, I now give the students mini projects to complete, for which reports need to be prepared. The formal exams thus do not include any practical application; they test the theory and whether the students can accurately interpret provided output (↪ view). This way I can assess not only the students' ability to memorise a concept, but also their understanding of how an analytics problem should be approached. In addition, their software and report writing skills are assessed through the mini project (↪ view).
Students are becoming more skilled at using AI, and the project assessment activity will have to be adjusted to account for this. In 2025, to partially address the problem, the project instructions were tightened to limit the students to using only analyses that I demonstrated to them. They are also only permitted to use functions that I included in my course readers (↪ view). I realise this is only a stopgap, but it was the best measure I could devise this year. Further research on assessment in the time of AI will be required.
To assess how well their R Markdown skills have developed, the project report now also needs to be prepared using this technology (↪ view). The marking rubric used to assess the report is included in the project descriptor to ensure students know what is expected of them (↪ view).
The evidence presented here demonstrates that I am a thoughtful assessment designer. I make an effort to ensure that the skills I aim to instill in a module are in fact carried over to the students. It also shows that I assess my Statistics graduates in ways that are relevant to their skills development. There is evidence that I adjust my assessment strategies when necessary and that I maintain awareness of global changes, here the impact of AI, that need to be considered when designing assessments.