Schmidt & Brown (2021). Evidence-Based Practice for Nurses:
Chapter 7: Key Concepts and Principles of Quantitative Designs
Explain how the study purpose, literature review, research questions or hypotheses, and the overall study design are interrelated
List elements to be considered when appraising quantitative designs
Categorize types of study designs based on their purpose
Categorize study designs based on the time dimension of data collection across retrospective, cross-sectional, repeated measures, and longitudinal or prospective designs.
Discuss the significance of the key concepts of causality, control, manipulation, bias, and confounding as they relate to quantitative designs
Define the four types of validity: statistical conclusion, internal, construct, and external
Describe strategies to minimize the six threats to internal validity
Identify three factors that affect statistical conclusion validity
List five factors that affect construct validity
Describe strategies to minimize the four threats to external validity
Discuss ethical issues related to study validity
Quantitative designs can be used for 4 key purposes
(1) Examining causality
(2) Predicting relationships and/or differences among variables
(3) Explaining relationships and/or differences among variables
(4) Describing a phenomenon in detail
4 main types of designs
Experimental (to determine causality)
(1) True-Experimental
(2) Quasi-experimental
Nonexperimental (to describe, examine, predict)
(3) Correlational
(4) Descriptive
Experimental vs. Nonexperimental
Experimental
Manipulate the independent variable (IV)
IV = the intervention, or "treatment," that the researcher wants to test in a specific group of people to determine its effect on the outcome of interest, known as the dependent variable (DV)
5 requirements of a true experimental design
(I) A hypothesis that tests a causal relationship (i.e., testing for the effect that an IV has on a DV)
(II) A treatment group that receives the intervention and a control group that does not get the intervention being tested
(III) Random assignment of participants to treatment/control groups to reduce bias and confounding
(IV) Manipulation of the intervention (IV)
(V) Tight control of the experiment to minimize the influence of confounding variables
Quasi-experimental designs
Involve manipulation of the IV
Lack either randomization or a control group
Nonexperimental
Lacks manipulation of the IV
Also called observational designs
The researcher "observes" how the variables of interest occur naturally, without the researcher trying to change how the conditions normally exist
1. Retrospective Design
= Research designs in which researchers look back in time to determine possible causative factors; ex post facto ("after the fact")
Start with the DV and look back in time to determine possible causative factors
The IV cannot be manipulated and the participants cannot be randomly assigned
Retrospective designs are never experimental in nature
Case-Control Study
= A type of retrospective study in which researchers begin with a group of people who have already been diagnosed with the disease ("cases") and compare them with those who do not have the condition
2. Cross-sectional Design
= Nonexperimental design used to gather data from a group of participants at only one point in time; measures the cause and outcome variables as each exists in a population or representative sample at one specific point in time
Provide a snapshot by collecting data about both the IV and DV at the same time
Difficult to establish cause and effect
Cohort Comparison Design
= Nonexperimental cross-sectional design in which more than one group is studied at the same time, so that conclusions about a variable over time can be drawn without following a single group for an extended period
Advantage
Easier to manage
More economical
Threats of mortality, maturation, and testing are minimized because data are collected only one time from each subject
Disadvantage
Difficult for researchers to make claims about cause and effect
3. Longitudinal Design
= Designs used to gather data about participants at more than one point in time
Prospective designs
= Studies that start with a presumed cause and follow participants forward in time to determine whether the hypothesized effects actually occur
Repeated measures designs
= Research designs where researchers measure subjects more than once over a short period of time
Advantage
Provides baseline data so that before and after comparisons can be made on the same subject
Subjects are likely to remain in the study because the time period is short
Panel design
= Longitudinal design where the same participants, drawn from the general population, provide data at multiple points in time over a long period of time and at specified intervals
Trend
= A type of longitudinal design to gather data from different samples across time
Follow-up study
= A longitudinal design used to follow participants, selected for a specific characteristic or condition, into the future
Cohort studies
Can be nonexperimental or experimental follow-up studies
Crossover designs
= Experimental designs that use two or more treatments; participants receive treatments in random order
Advantage
Provide important information about the chronological relationships that exist between the IV and DV by determining changes over time
Can be used to test cause and effect
Disadvantage
Cost in following participants over an extended period of time
Causality
= the relationship between a cause and its effect
Probability
= Likelihood or chance that an event will occur in a situation
Control
= Ability to manipulate, regulate, or statistically adjust for factors that can affect the dependent variable
Manipulation
= The ability of researchers to control the independent variable
Confounding
= When extraneous variables influence the relationship between the independent and dependent variables
Extraneous variables
= Factors that interfere with the relationship between the independent and dependent variables; confounding variable; Z variable
Bias
= Systematic error in selection of participants, measurement of variables, and/or analysis of data that distorts the true relationship between IV and DV
Randomization
= The selection, assignment, or arrangement of elements by chance
Random Sampling
= Technique for selecting elements (e.g., participants, charts) whereby each has the same chance of being selected
Random Assignment
= Assignment technique in which participants have an equal chance of being assigned to either the treatment or the control group
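A minimal Python sketch (not from the text) of random assignment: chance, rather than researcher judgment, places each participant in the treatment or control group. The participant IDs and the even split are illustrative assumptions.

```python
import random

def randomly_assign(participants, seed=None):
    """Randomly assign participants to treatment or control groups (illustrative sketch)."""
    rng = random.Random(seed)
    shuffled = participants[:]      # copy so the original roster is untouched
    rng.shuffle(shuffled)           # chance alone determines the order
    midpoint = len(shuffled) // 2
    return {"treatment": shuffled[:midpoint], "control": shuffled[midpoint:]}

# Hypothetical roster of six participants
groups = randomly_assign(["P1", "P2", "P3", "P4", "P5", "P6"], seed=42)
```

Because each ordering of the shuffled list is equally likely, every participant has the same chance of landing in either group, which is what reduces selection bias and confounding.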
Between-groups designs
= Study design in which two or more separate groups of participants are compared
Within-groups design
= Comparisons are made about the same participants at two or more points in time or on two or more measures
Study validity
= Ability to accept results as logical, reasonable, and justifiable based on the evidence presented
4 types of validity
1. Statistical conclusion validity
= The degree to which the results of the statistical analysis reflect the true relationships between the independent and dependent variables
2. Internal validity
= The degree to which one can conclude that the independent variable produced changes in the dependent variable
3. Construct validity
= The degree to which the instruments used accurately measure the theoretical concepts they are intended to measure
4. External validity
= The degree to which the results of the study can be generalized to other participants, settings, and times
3 factors that affect statistical conclusion validity
1. Low statistical power
Type I errors
Occur when the researcher rejects a true null hypothesis
Type II errors
Happen when a researcher accepts a false null hypothesis, or inaccurately concludes that there is no relationship between the IV and DV when an actual relationship does exist
Statistical power
= the probability that a statistical test will be able to detect a significant relationship or difference between variables if the relationship/difference actually exists
Sample sizes may influence statistical power
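A Monte Carlo sketch in Python (my illustration, not from the text) of how sample size affects power: simulated studies with a real treatment effect are "run" many times, and power is estimated as the fraction in which the test detects the effect. The effect size, trial count, and critical value of 1.96 are illustrative assumptions.

```python
import random
import statistics

def estimate_power(effect_size, n_per_group, z_crit=1.96, trials=2000, seed=1):
    """Estimate power as the fraction of simulated studies whose two-sample
    z statistic exceeds the critical value when a true effect exists."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(trials):
        control = [rng.gauss(0.0, 1.0) for _ in range(n_per_group)]
        treated = [rng.gauss(effect_size, 1.0) for _ in range(n_per_group)]
        diff = statistics.mean(treated) - statistics.mean(control)
        se = (2 / n_per_group) ** 0.5      # standard error, assuming sd = 1 in both groups
        if abs(diff / se) > z_crit:
            rejections += 1                # the effect was detected
    return rejections / trials

small = estimate_power(0.5, n_per_group=20)    # small study
large = estimate_power(0.5, n_per_group=100)   # larger study, same true effect
```

With the same true effect, the larger sample detects it far more often: failing to detect it is exactly the Type II error described above, and low power makes that error more likely.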
2. Low reliability of the measures
Instruments that are not reliable interfere with researchers' abilities to draw accurate conclusions about relationships between the IV and DV
When appraising research articles, assess:
If self-report instruments achieved an internal consistency reliability of .70 or higher
Test-retest reliability of .80 or higher
If more than one data collector was used, interrater reliability of .90 or higher
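The internal consistency figure appraised above is usually Cronbach's alpha. A small Python sketch (my illustration, assuming the standard formula) computes it from item-level scores; the example scores are hypothetical.

```python
import statistics

def cronbach_alpha(item_scores):
    """Cronbach's alpha for internal consistency (sketch).
    item_scores: one list of scores per item, each with one score per respondent."""
    k = len(item_scores)
    totals = [sum(items) for items in zip(*item_scores)]    # per-respondent total scores
    item_var = sum(statistics.pvariance(item) for item in item_scores)
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Two hypothetical items rated by five respondents
alpha = cronbach_alpha([[1, 2, 3, 4, 5], [2, 1, 4, 3, 5]])   # ≈ 0.889
```

An alpha of .70 or higher, as the appraisal criterion above states, suggests the items hang together well enough to be treated as one scale.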
3. Lack of reliability of treatment implementation
Occurs when different researchers or their assistants implement the treatment (IV) differently with different participants, or when the same researcher is inconsistent in implementing the treatment from one time to another
6 threats to internal validity
1. Selection bias
= A threat to internal validity when the change in the dependent variable is a result of the characteristics of the participants before they entered a study
2. History
= A threat to internal validity when the dependent variable is influenced by an event that occurred during the study
3. Maturation
= A threat to internal validity when participants change by growing or maturing
4. Testing
= A threat to internal validity when a pretest influences the way participants respond on a posttest
5. Instrumentation
= A threat to internal validity when there are inconsistencies in data collection
6. Mortality
= A threat to internal validity when there is a loss of participants before the study is completed; attrition rate
Attrition rate
= Dropout rate; loss of participants before the study is completed; threat of mortality
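The attrition rate itself is simple arithmetic: the proportion of enrolled participants lost before completion. A one-line Python sketch with hypothetical enrollment numbers:

```python
def attrition_rate(enrolled, completed):
    """Proportion of participants lost before the study is completed (sketch)."""
    return (enrolled - completed) / enrolled

# Hypothetical study: 200 enrolled, 150 completed -> 50 dropped out
rate = attrition_rate(200, 150)   # 0.25, i.e., a 25% attrition rate
```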
Participant burden
= The amount of participant effort and time required for being in a study
5 factors that affect construct validity
1. Inadequately defined constructs
Construct
= A term used when referring to concepts and variables together
It should be easy to see the relationships when an article is well written; see the figure as an example
2. Bias
= A systematic error in selection of participants, measurement of variables, or analysis
3. Confounding
= A possible source of bias in a study in which an unmeasured, or extraneous, variable (the confounder) distorts the true relationship between the treatment and outcome variables
Often referred to as a mixing of effects because the relationship between the IV and DV is "mixed" with the effects of a confounding variable
4. Reactivity
Occurs when the act of participating in a research study changes the behavior of the participants
"hypothesis guessing"
Participants try to guess what responses the researcher wants and will change their behavior based on those guesses, which can affect the influence of the intervention being conducted
"socially desirable"
Participants answer questions in a way that is "socially desirable" rather than based on their own beliefs or preferences
Hawthorne effect
First recognized in studies done at Western Electric Corporation's Hawthorne plant
IV = the amount of lighting
DV = worker productivity
5. Experimenter expectancies
When researchers have expected or desired outcomes in mind, they may inadvertently affect how interventions are conducted and how they interact with participants
Double-blind experimental designs
= Studies in which participants and researchers are unaware whether participants are receiving experimental interventions or standard care
4 threats to external validity
1. Effects of selection
= Threats to external validity when the sample does not represent the population
2. Interaction of treatment and selection of subjects
= A threat to external validity where the independent variable might not affect individuals the same way
3. Interaction of treatment and setting
= A threat to external validity when an intervention conducted in one setting cannot be generalized to a different setting
4. Interaction of treatment and history
= A threat to external validity when historical events affect the intervention