This checklist is useful for determining the quality of the articles shortlisted for use within the literature review.


Checklist For the Purpose of Rating Journal Articles

This checklist was drafted by Claire Bryan-Hancock on the 19th May with reference to Tooth et al. (2005)[1].




1. Are the objectives or hypotheses of the study stated?


2. Is the target population defined?

The group of persons toward whom inferences are directed. Sometimes the population from which a study group is drawn.

3. Is the sampling frame defined?

The list of units from which the study population will be drawn. Ideally the sampling frame would be identical to the target population, but this is not always possible.

4. Is the study population defined?

The group selected for investigation.

5. Is the variable being assessed adequately operationalised?

An adequate clinical definition of the injury/illness of interest, explained qualitatively.

6. Are the study setting (venues) and/or geographic location stated?

Comment required about location of research. Could include name of centre, town, or district.

7. Is the population studied representative of the population to which it is being generalised?

Particularly for hospital data: is the measured population representative of all those living within the community?

8. Is the population stratified by age, gender or other variables of interest?

Results for participants also reported in terms of their age, gender or other variables.

9. Are the dates between which the study was conducted stated or implicit?


10. Are eligibility criteria stated?

The words “eligibility criteria” or equivalent are needed, unless the entire population is the study population.

11. Are the issues of “selection in” to the study mentioned?

Any aspect of recruitment or setting that results in the selective choice of participants (e.g., gender or health status influenced recruitment).

12. Is the number of participants justified?

Justification of number of participants needed to detect anticipated effects. Evidence that power calculations were considered and/or conducted.
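As a worked illustration of the kind of power calculation this item looks for, the sketch below estimates the sample size needed per group to detect a difference between two proportions using the standard normal-approximation formula. All figures are illustrative, not drawn from any particular study.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate sample size per group needed to detect a difference
    between two proportions (normal-approximation formula)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for two-sided test
    z_beta = z.inv_cdf(power)            # value corresponding to desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# e.g., detecting a drop in prevalence from 50% to 40%
# at alpha = 0.05 with 80% power
print(n_per_group(0.50, 0.40))  # → 385 per group
```

A study reporting a calculation like this, or citing one, would satisfy item 12; smaller anticipated effects or higher desired power require more participants.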

13. Are the numbers meeting and not meeting the eligibility criteria stated?

Quantitative statement of numbers.

14. For those not eligible, are the reasons why stated?

Broad mention of the major reasons.

15. Are the numbers of people who did/didn’t consent to participate stated?

Quantitative statement of numbers.

16. Are the reasons that people refused to consent stated?

Broad mention of the major reasons.

17. Were consenters compared with non-consenters?

Quantitative comparison of the different groups.

18. Was the number of participants or potential participants at the beginning of the study stated?

Total number of participants who were eligible or consented to participate, including any who subsequently dropped out or were excluded.

19. Were the methods of data collection stated?

Descriptions of tools (e.g., surveys, physical examinations) and processes (e.g., face-to-face, telephone).

20. Was the reliability (repeatability) of measurement methods stated?

Evidence of reproducibility of the tools used.

21. Was the validity (against a “gold standard”) of measurement methods mentioned?

Evidence that the validity was examined against, or discussed in relation to, a gold standard.

22. Were any confounders mentioned?

Confounders were defined as a variable that can cause or prevent the outcome of interest, is not an intermediate variable, and is associated with the factors under investigation.

23. Was potential bias identified?

Identification of biases that may have affected the results, whether arising from the testing materials, the population, or the researchers.

24. Was the type of statistical analysis conducted stated?

Specific statistical methods mentioned by name.

25. If relevant, was the number of participants at each follow-up specified?

Quantitative assessment of numbers of participants at each follow-up of testing.

26. If relevant, were the reasons for participant loss to follow-up stated?

Broad mention and quantification of the major reasons.

27. Was the participant loss to follow-up taken into account in the analyses?

Specific mention of adjusting for, or stratifying by, loss to follow-up.

28. Were confounders accounted for in the analysis?

Specific mention of adjusting for, or stratifying by, confounders.

29. Was the impact of biases assessed qualitatively?

Specific mention of biases affecting results, but magnitude not quantified.

30. Was the impact of biases assessed quantitatively?

Specific mention of numerical magnitude of biases.

31. Did the authors relate results back to the target population?

A study is generalisable if it can produce unbiased inferences regarding the target population (beyond the subjects in the study). Discussion could include that generalisability is not possible.

32. Was there any discussion of generalisability?

Discussion of generalisability beyond the target population.

33. Were absolute effect sizes reported?

Absolute effect was defined as the outcome of an exposure expressed, for example, as the difference between rates, proportions, or means, as opposed to the ratios of these measures.

34. Were the relative effect sizes reported?

Relative effects were defined as a ratio of rates, proportions, or other measures of an effect.
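The distinction drawn in items 33 and 34 can be shown with a small numeric sketch; the rates below are hypothetical, not taken from any study.

```python
# Hypothetical outcome rates in exposed vs unexposed groups
rate_exposed = 0.30
rate_unexposed = 0.15

# Absolute effect (item 33): the difference between the rates
absolute_effect = rate_exposed - rate_unexposed

# Relative effect (item 34): the ratio of the rates
relative_effect = rate_exposed / rate_unexposed

print(absolute_effect)  # → 0.15 (15 percentage points)
print(relative_effect)  # → 2.0 (twice the rate)
```

Reporting both measures is preferable, since a large relative effect can correspond to a small absolute effect when the underlying rates are low.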


[1] Tooth L, Ware R, Bain C, Purdie DM & Dobson A 2005, ‘Quality of reporting of observational longitudinal research’, American Journal of Epidemiology, vol. 161, no. 3, pp. 280–288.