Please see below for what to provide to your methodological consultant.
Statistical power is the probability of detecting an effect if it really exists. It is a function of the sample size (N) and the effect size to be detected (e.g., Cohen's d or a risk difference). Another important consideration is the acceptable probability of a type-I error (α), conventionally set at 5%; a type-I error is concluding the treatment works when, in truth, it does not. Power (1 − β) is the probability of not making a type-II error (β); a type-II error is concluding the treatment does not work when, in truth, it does. Power below 80% is commonly considered inadequate. More precisely, power is the probability, over a long run of identically constructed experiments, of finding a significant effect, assuming a true population effect (d).
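To make these quantities concrete, here is a minimal sketch (using only the Python standard library, and a normal approximation rather than the exact noncentral t) of how power depends on the effect size d and the per-group sample size n:

```python
# Approximate power of a two-sided, two-sample test for a standardized
# effect d with n participants per group (normal approximation).
from math import sqrt
from statistics import NormalDist

def power(d: float, n_per_group: float, alpha: float = 0.05) -> float:
    z = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided critical value
    ncp = d * sqrt(n_per_group / 2)          # noncentrality parameter
    return 1 - NormalDist().cdf(z - ncp)     # approximate power

# Classic benchmark: d = .5 with 64 per group gives roughly 80% power
print(round(power(0.5, 64), 2))
```

The normal approximation slightly overstates power relative to the exact t-test calculation, but is adequate for planning purposes.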
When planning a study, we can generally tolerate only one unknown among β, α, d, and N. We solve for that unknown after making assumptions about the values of the others.
When planning a randomized controlled trial (RCT), investigators often know what sample size is feasible to enroll. When this is true, we can calculate the minimum detectable effect (d) using conventional levels of making a type-I error (α = .05) with 80% power (β = .2).
The ANCOVA model (outcome regressed on baseline value of the outcome, and treatment group assignment) is the most common analytic approach to evaluating RCTs. This is because it is powerful (it is statistically efficient) and easily implemented and understood. The method is discussed in Everitt & Wessely (2008) Clinical trials in psychiatry. John Wiley & Sons.
The minimum detectable effect for a continuous outcome (y) in an ANCOVA[1] approach can be determined with the following equation:

d = 4 √( (1 − r²) / n )
where d is the standardized mean difference in the outcome across treatment groups at follow-up (standardized to the pooled baseline standard deviation; Cohen's d), n is the (effective) per-group sample size, and r is the pre-post correlation of the outcome between baseline and follow-up. The number 4 encodes the assumption that the type-I error level is 5% and the type-II error level is 20% (i.e., 80% power; cf. Lehr R. Sixteen s-squared over d-squared: A relation for crude sample size estimates. Statistics in Medicine. 1992;11(8):1099-1102). If you expect missing data (e.g., attrition), use the expected complete-data sample size, even if you plan (as you should) on an as-randomized, intention-to-treat mode of analysis.
This calculation is easy to do with a hand calculator or spreadsheet.
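As a sketch, the same calculation in Python (the function name and example values are illustrative, not from the original):

```python
import math

def min_detectable_effect(n_per_group: float, r: float) -> float:
    """Minimum detectable effect (Cohen's d) for an ANCOVA analysis,
    assuming alpha = .05 (two-sided) and 80% power.

    n_per_group: expected per-group sample size with complete data
    r: anticipated pre-post correlation of the outcome
    """
    return 4 * math.sqrt((1 - r**2) / n_per_group)

# Example: 60 per group and a pre-post correlation of .5
print(round(min_detectable_effect(60, 0.5), 3))  # 0.447
```

Note that a stronger pre-post correlation shrinks the minimum detectable effect, which is the efficiency gain from the ANCOVA adjustment.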
More complex designs, such as designs with more than one follow-up, will require simulation studies, which we won't explain here (but see this work in progress). For that, you will probably want to work with a methodological consultant.
To work with a methodological consultant, provide a table similar to what you will need for your NIH Forms-F pages. Below is a Google Sheets template you can open, save to your own Google Drive or download as Excel, and fill out. Also provide information about randomization and the allocation ratio, anticipated missing data, and other important design considerations.
Please see this web page, which also includes a checklist you, as the principal investigator, can use to make sure you have the information you need to convince reviewers that your design is adequate to test your hypotheses.
The ANCOVA (analysis of covariance) is perhaps the most common approach to analyzing data from a randomized controlled trial with a continuous dependent variable. Briefly, the follow-up value of the outcome variable is regressed on the baseline value of the outcome variable and treatment assignment. Other adjustment variables should include factors used in randomization (e.g., strata, site). It is in the investigators' best interest to also adjust for covariables that are observed before randomization and are strongly related to the outcome. Doing so may boost power, even if the covariable is not differentially distributed across treatment groups. These adjustment factors should be specified before data collection.
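The ANCOVA described above can be sketched as an ordinary least-squares fit; the minimal example below uses numpy with simulated data (the variable names and true effect size of 5 are illustrative assumptions, not from a real trial):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
baseline = rng.normal(50, 10, n)                 # baseline outcome
treatment = rng.integers(0, 2, n).astype(float)  # 1 = active, 0 = control
# Simulated follow-up: correlated with baseline plus a treatment effect of 5
followup = 0.6 * baseline + 5.0 * treatment + rng.normal(0, 8, n)

# ANCOVA design matrix: intercept, baseline outcome, treatment indicator
X = np.column_stack([np.ones(n), baseline, treatment])
coef, *_ = np.linalg.lstsq(X, followup, rcond=None)
print(coef[2])  # baseline-adjusted treatment effect (true value is 5)
```

In practice you would use a regression routine that also reports standard errors and confidence intervals (e.g., statsmodels' OLS), and include the randomization factors and pre-specified covariables in the design matrix.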