This standard is worth 2 credits. Your evaluation can be carried out via a report, poster or series of videos evaluating a chosen statistical report.
Knowing who funded the research can help you to understand why it was created.
The audience of a report is made up of the individuals or groups who might be interested in the context it explores (who cares?). Individuals, organisations, iwi, businesses, and community groups may be able to use the information to influence decision making.
One way to identify who might use the information in a statistical report is to do an internet search to find out who is already sharing or quoting it!
The most common meaning of the word population is the number of people who live in a particular place. The word population can also mean a group of people who have something in common, for example, New Zealanders who live in rural areas.
In statistics, the word population has a much broader meaning. It means an entire group of people, animals, plants, or things that we want to learn about or describe. So the target population you need to identify in your assessment task needs to be the particular group of people that the report is about.
Throughout 2MAS we look to draw conclusions for a whole target population from a sample. It’s important to understand that the sample is just a smaller part of the whole.
Most of the time, statisticians will investigate a sample of the population instead. It is important that a sample is fair. Everyone in the population should have the same chance of being selected to be in the sample.
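As a sketch of what "the same chance of being selected" means in practice (the sample frame and numbers here are made up for illustration), a simple random sample without replacement can be taken like this:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical sample frame: ID numbers for 500 contactable people.
sample_frame = list(range(1, 501))

# A simple random sample of 30: every ID has the same chance of selection,
# and no one can be picked twice (sampling without replacement).
sample = random.sample(sample_frame, k=30)

print(len(sample))       # 30
print(len(set(sample)))  # 30 -- no repeats
```

Every member of the frame is equally likely to appear in `sample`, which is the fairness property described above.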
There are four key groups we should think about, starting with the largest.
1) The TARGET POPULATION. This is everyone in the place or group being investigated. It could be restricted to a certain gender, age group, location or ethnicity.
2) The SAMPLE FRAME. These are the members of the population that we are able to contact. We would hope this covers most of the population, but some members of the population will not be in the sample frame. Sometimes, the sample frame is carefully chosen in a way we hope is representative of the wider population.
3) The SAMPLE. These are the people we select and try to collect data from. They are a smaller group, chosen from the sample frame.
4) The RESPONDENTS. We took a sample from the sample frame to represent the target population. However, for various reasons, not everyone in the sample will respond.
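The four groups above nest inside one another, from largest to smallest. A small illustrative sketch (all numbers are invented) makes the nesting explicit:

```python
import random

random.seed(1)  # reproducible illustration

# Made-up numbers showing how the four groups nest.
target_population = set(range(1, 1001))  # everyone being investigated
sample_frame = set(range(1, 901))        # those we can actually contact
sample = set(random.sample(sorted(sample_frame), 50))  # chosen from the frame
respondents = set(list(sample)[:40])     # not everyone in the sample replies

# Each group is contained in the one before it.
assert respondents <= sample <= sample_frame <= target_population
print(len(target_population), len(sample_frame), len(sample), len(respondents))
```

Non-response (the gap between the sample and the respondents) is one source of non-sampling error worth commenting on in your evaluation.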
How many people were asked, and is this enough? The larger the sample size, the more reliable the statistics.
For political polls, with a few simple questions, the sample size is usually around 1000 respondents.
For more complicated surveys, a statistician might decide to survey fewer people, due to cost or time constraints.
The size of each subgroup being considered needs to be reasonable too.
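A common rule of thumb for polls of simple proportions approximates the 95% margin of error as 1/√n (here expressed as a percentage). The function name below is my own; the calculation shows why around 1000 respondents is a typical poll size, and why small subgroups are much less precise:

```python
import math

def margin_of_error(n):
    """Rule-of-thumb 95% margin of error for a proportion, as a percentage."""
    return 100 / math.sqrt(n)

print(round(margin_of_error(1000), 1))  # 3.2 -- a poll of ~1000 is accurate to about +/-3%
print(round(margin_of_error(100), 1))   # 10.0 -- a subgroup of 100 is far less precise
```

So a reported difference of 2% between groups of a few hundred people each may be well within the margin of error.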
CensusAtSchool - Get Registration Code from your teacher
Bad questionnaire - Collecting Student Information
Bad questionnaire - Ice Cream
What did you learn about questionnaire writing from the above?
Understandable: Are the questions easy to answer? Are the questions written in plain English, or is there technical terminology involved?
Anonymous: Have the respondents been assured that their responses are anonymous? Do they know who will have access to their data?
Sensitive: Are the questions sensitive or embarrassing? Are the questions asked in a non-judgemental way?
Useful: Are the questions asked in a way that will give data that is fit for purpose? Could a different style of question have given more useful data?
Questions to ask about data displays:
What is the key message of the graphs/visuals?
How do the visuals support the numerical information?
Are any aspects of the visuals misleading?
Make sure when you are discussing statistical results, you consider and discuss causation vs correlation. Implying causation from correlation will prevent you from gaining an excellence grade.
What other variables might be affecting the response variable? Is there a third variable that helps us explain these associations (a ‘confounding’ variable)?
Can we establish causation just because we see a correlation?
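No. A small simulation (entirely invented data, in the spirit of the classic ice cream example) shows how a lurking variable can create a strong correlation between two variables that do not cause each other at all:

```python
import random

random.seed(0)

# Hypothetical: a hot day (the confounder) drives both ice cream sales
# and swimming incidents; neither one causes the other.
n = 1000
temperature = [random.gauss(20, 5) for _ in range(n)]
ice_cream = [t + random.gauss(0, 2) for t in temperature]
incidents = [0.5 * t + random.gauss(0, 2) for t in temperature]

def correlation(x, y):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Strong positive correlation -- but only because both variables
# depend on temperature, the confounding variable.
print(round(correlation(ice_cream, incidents), 2))
```

Banning ice cream would not reduce swimming incidents; the association disappears once temperature is accounted for. This is the kind of reasoning an excellence-level discussion needs.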
The final Evaluation considers the statistically based report as a whole.
You should also give an overview of the biggest problems with the report, and suggest improvements to the study.
Are the findings suitable for the report’s purpose?
To what extent are the results unbiased and reliable?
Are there any underlying or lurking variables that may have an impact on the outcome?
This AI tool uses the "evaluate a statistical report" focus questions to summarise what should be highlighted in a study report.
UPLOAD the report you are considering using the +, then ask it "give me a list of what to consider when evaluating this report"!
NOTE that the intention of this tool is to help guide your reading of your chosen report, not to do your evaluation for you. Please do not copy and paste from the tool. It might miss things or make things up, so you need to check what it says against the report itself.
This is an open book assessment.
You should:
Identify the purpose of the report
Identify and comment on features of the survey relevant to the report’s purpose.
Features of the report could include:
population measures and variables;
sampling methods;
survey methods;
sampling and possible non-sampling errors;
sample size.
Identify and comment on findings of the survey relevant to the report’s purpose.
Make clear links to the context, including reference to relevant background information.
The quality of your evaluation, including your discussion, reasoning and justification, and how well you link this to the context of the report, will determine your grade.
What data is displayed in the report?
What type of data is it, categorical or numerical?
How was the data displayed in the report?
Are there displays or measures included that are appropriate for the type of data?
What summary statistics were used in the report?
How accurate is the data?
Where does the data quoted / used in the report come from?
What survey questions were asked?
Were the survey questions appropriate?
Could the survey questions be misinterpreted or not give the data needed?
What were the variables of interest?
How were the variables of interest measured?
Do the comments (descriptions) made in the report accurately reflect the data given?
Are any comments misleading or biased?
Could alternative analyses be made?
Could the data have been interpreted in another way?
What important data / information is not present?
What questions is the report answering (what is the investigative question(s))?
Who is the report intended to be about (who is the intended population)?
Who is the report aimed at (who might be interested in the outcomes)?
What is the purpose of the report?
What further information is needed?
Are the claims made in the statistically based reports valid?
Achievement Standard 91266 (version 3)
Clarifications (updated 2019)