In ABA, the independent variable (IV) is the intervention or treatment being studied, and the dependent variable (DV) is the target behavior the IV aims to change. The researcher manipulates the IV to produce and demonstrate an effect on the DV. The goal of research in ABA is to demonstrate a functional relationship between an IV and a DV.
Example: A token economy reinforcement system (IV) may be used to influence students' homework completion rate (DV)
Internal Validity: the extent to which a change in a DV can be attributed to the effect of an IV and not to any other circumstances. In other words, internal validity exists when researchers use sound experimental design to demonstrate a functional relationship between a DV and IV, which cannot be explained by any other variable. Any circumstances which have the potential to influence the results of a study outside the IV being manipulated are considered confounding variables and potential threats to the internal validity of the study. Here are some examples-
History: external events or circumstances outside the researcher's control influence results
Maturation: participants experience natural growth in maturity over the course of research, affecting results
Attrition: participants drop out of research, influencing results
Testing Effects: participants' awareness of being studied or familiarity with the testing instruments influences results
Instrumentation: changes to the measurement procedure influence results
Selection Bias: participants are selected or recruited in ways which bias results
Statistical Regression: the tendency of extreme scores to move closer to the average with repeated measurement, affecting results (e.g., a participant selected for an unusually high score is likely to score closer to their typical level when measured again)
External Validity: the extent to which an intervention can be replicated and generalized to other participants, settings, and behaviors. Issues which prevent results from being replicable with other populations, settings, or behaviors are considered threats to external validity. Here are some examples-
Ecological: when an intervention does not transfer to an organic/non-clinical setting
Population: when an intervention works only with a sample that is not representative of the broader population
Temporal: time-specific results which cannot be replicated at a later time
Situational: studies done in certain contexts may only be applicable within that same context
Cross-cultural: when cultural context prevents an intervention from being generalized to other cultures
Certain features distinguish single-subject design from other types of research:
Subject serves as their own control: Rather than comparing separate individuals or groups, some of whom receive an intervention and some of whom serve as controls, in SSD each participant experiences both the control and intervention conditions. Each participant's response to intervention is measured against their own response to the control conditions.
Repeated measures: Data must be collected on the DV repeatedly to establish a pattern and the effectiveness (or lack thereof) of the intervention being tested.
Prediction: A study ought to be designed and implemented so that the results enable researchers to make predictions about future behavior based on the establishment of a functional relationship between a DV and IV. Visual representation of data can be helpful with prediction.
Verification: Involves demonstrating that the target behavior would have remained unchanged except for the introduction of an intervention. In this way researchers verify that change cannot be explained by other variables.
Replication: SSD studies are designed to replicate a subject's response (DV) to an intervention (IV) and establish a pattern of control. See the Common Experimental Designs section below for more information about how each design addresses replication.
Single-subject design research seeks to answer different questions than group design. Group design focuses on broad trends and average results, which can streamline data and minimize the impact of individual variability, and it can offer improved external validity. SSD focuses on an individual's experience; it can be tailored to individual needs and is flexible in response to data trends.
Single-subject design can take many forms. Here are a few of the most common experimental designs:
Reversal Design: Also known as ABAB design. A baseline condition (A) is alternated with an intervention condition (B) to establish experimental control of the DV by the IV. The behavioral response (DV) to the intervention (IV) is verified by the return-to-baseline condition and replicated when the intervention is reintroduced (see the data sketch after this list).
Multiple Baseline Design: Measures the effect of an intervention (IV) on two or more behaviors, subjects, or settings (DV). Intervention is applied in staggered succession to verify the effect of the intervention and replicate the results of the intervention.
Multi-element Design: Also known as Alternating Treatment design. Two intervention conditions (IV) are introduced in an alternating pattern to determine which has a greater effect on a target behavior (DV).
Changing-Criterion Design: Involves changing the criterion for obtaining reinforcement over time to strengthen or fine-tune a behavior that is already in a subject's repertoire. Can be useful in situations when withdrawal is not feasible.
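As a rough illustration of the reversal design logic, here is a minimal sketch of how ABAB phase data might be organized and summarized. The phase labels and session values are hypothetical and invented for the example; they are not drawn from any study.

```python
# Hypothetical ABAB (reversal design) data for one subject: repeated measures
# of a target behavior (e.g., responses per session). All values are invented.
phases = {
    "A1 baseline": [2, 3, 2, 3, 2],
    "B1 intervention": [6, 7, 8, 8, 9],
    "A2 return to baseline": [3, 2, 3, 2, 2],
    "B2 intervention reintroduced": [7, 8, 9, 9, 8],
}

# Summarize each phase. A drop back toward baseline levels in A2 supports
# verification, and a recovery of the effect in B2 supports replication.
for phase, sessions in phases.items():
    mean = sum(sessions) / len(sessions)
    print(f"{phase:30s} mean = {mean:.1f}  sessions = {sessions}")
```

In practice, behavior analysts rely primarily on graphed data and visual analysis of level, trend, and variability across phases; the per-phase means above are only a convenient stand-in for that inspection.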
A study can be designed (and/or data can be collected and organized) to produce different types of analysis.
Comparative Analysis: comparing 2 or more interventions to determine which is more effective in addressing a target behavior. Interventions can be alternated to determine which produces better results.
Component Analysis: breaking down a treatment package to determine what part is impacting the target behavior. Researchers may remove one element at a time from a complex intervention to see what impact is seen on the target behavior.
Parametric Analysis: determining which dosage (amount, intensity, etc.) of an intervention produces the most desirable effect on a target behavior. An intervention can be incrementally increased to determine the ideal dosage (see the sketch below).
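As a rough sketch of parametric analysis, the example below compares a hypothetical target-behavior measure across several dosage levels of the same intervention. The dosage values and results are invented for illustration only.

```python
# Hypothetical parametric analysis: the same intervention delivered at
# increasing dosages (here, minutes of practice per day), with the resulting
# target-behavior measure recorded under each dosage. All values are invented.
dosage_results = {5: [4, 5, 4], 10: [7, 8, 7], 15: [9, 9, 10], 20: [9, 10, 9]}

# Compare mean responding at each dosage to see where gains level off.
for dose, results in sorted(dosage_results.items()):
    mean = sum(results) / len(results)
    print(f"{dose:>2} min/day -> mean = {mean:.1f}")
```

In this invented data set, improvement levels off around 15 minutes per day, which is the kind of pattern a parametric analysis is designed to reveal.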
When conducting research within ABA, it is important to consider the ethical issues involved in research with human subjects. Experimental design helps ensure that only evidence-based interventions are attempted and applied, by empirically isolating and demonstrating the impact of the intervention. This ensures that effective interventions are continued and ineffective interventions are discontinued. Adherence to experimental design principles protects both individual participants and the ABA field as a whole by promoting treatments and practices that have proven effective and beneficial. Practitioners must also consider which type of experimental design is most appropriate for a study. For example, a Reversal design may not be appropriate if withdrawing an intervention would cause harm to participants.
Regardless of which experimental design is used, practitioners must follow ethical principles when working with human subjects. First, this means obtaining informed consent from all participants (and/or their guardians), along with assent when appropriate. Participants must be made aware of the full scope of their participation, the purpose of the research, and all potential benefits or harmful effects. They must also be made aware of how data will be used and shared. Participation must be voluntary, and participants must be allowed to stop at any time for any reason. Practitioners must take steps to minimize potential harm, protect collected data, and communicate it responsibly.