General comments:
There is no single all-inclusive formula for calculating sample size. Calculating sample size (or, equivalently, statistical power or the type II error rate) depends on the following:
The size of the difference that has practical meaning to the experimenter (the effect size).
The amount of variation in the data.
Other considerations, including the type of data, the statistical method, the null hypothesis, and the null distribution.
Effect size: the size of the difference that is meaningful, which the experimenter wishes to detect.
Standardized effect size: the effect size divided by the standard deviation (Cohen's d, for a t-test).
RMSSE (Root Mean Square Standardized Effect) for ANOVA.
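As a concrete illustration, a minimal sketch in base R that solves for the per-group n of a two-sample t-test; the smallest meaningful difference and the standard deviation below are illustrative assumptions, not values from the original:

# Smallest difference of practical importance and assumed SD (illustrative).
delta <- 5
sigma <- 10
delta / sigma   # standardized effect size (Cohen's d) = 0.5

# Base R (stats package): solve for n per group at alpha = 0.05, 80% power.
power.t.test(delta = delta, sd = sigma, sig.level = 0.05,
             power = 0.80, type = "two.sample")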
The null distribution is influenced by the type of data, the null hypothesis, the experimental design (including randomization), and the statistical method.
The null distribution might loosely be thought of as the population distribution (which cannot be known) if there were no differences between groups in the population.
A hypothesis test asks the question: if there were no differences between groups in the population, what are the chances of seeing differences as large as those in the sample (experimental) data?
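This idea can be made concrete with a permutation test, which builds an empirical null distribution by shuffling group labels; the data below are invented for illustration:

# Invented example data for two groups.
x <- c(12, 15, 9, 14, 11)
y <- c(18, 16, 20, 15, 19)
obs <- mean(y) - mean(x)   # observed difference between group means

# Shuffle the group labels many times to see what differences arise
# when, by construction, there is no group effect in the population.
set.seed(1)
pooled <- c(x, y)
null_diffs <- replicate(9999, {
  g <- sample(pooled)
  mean(g[6:10]) - mean(g[1:5])
})

mean(abs(null_diffs) >= abs(obs))   # permutation p-value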
Depending on the situation, the calculation may also involve:
The noncentrality parameter, which describes the distribution of the test statistic when the alternative hypothesis is true.
The number of uncensored observations (events), rather than the total sample size, when data are censored, as in survival analysis.
A simulation-based approach, useful when no closed-form power formula exists for the design or analysis (see the sketch below).
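A minimal sketch of the simulation-based approach for a two-sample t-test; every parameter value here is an illustrative assumption:

# Estimate power by simulating data under the alternative hypothesis
# and counting how often the test rejects at the chosen alpha.
set.seed(1)
n     <- 25       # per-group sample size (assumed)
eff   <- 5        # true difference between group means (assumed)
sigma <- 10       # common standard deviation (assumed)
alpha <- 0.05
nsim  <- 10000

p <- replicate(nsim, {
  a <- rnorm(n, mean = 0,   sd = sigma)
  b <- rnorm(n, mean = eff, sd = sigma)
  t.test(a, b)$p.value
})

mean(p < alpha)   # estimated power: proportion of simulations that reject H0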
References:
NIST - Sample size computations (Univariate).
Arsham - Power of a Test and Size of an Effect.
Cornell paper on Experimental Power and Design. Includes R code for the t-test, O-C (operating characteristic) curves, and simulation.
G*Power 3, a free statistical power analysis program, is available at: http://www.psycho.uni-duesseldorf.de/abteilungen/aap/gpower3 (I have not vetted this software myself, but the odds are good that, if properly used, the results are fine.)
Power Analysis and Sample Size using R
(Figure or table residue; the recoverable labels describe badly chosen sample sizes: "embarrassingly low", "wasted lab time", "anemic", "irresponsible".)
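In the spirit of the heading above, a minimal sketch of power and sample-size calculations in R; the pwr package and the effect-size values are assumptions for illustration, not taken from the original resource:

# The pwr package is an assumed convenience here (install.packages("pwr")).
library(pwr)

# Two-sample t-test: per-group n for a medium standardized effect, d = 0.5.
pwr.t.test(d = 0.5, sig.level = 0.05, power = 0.80, type = "two.sample")

# One-way ANOVA, 4 groups: per-group n for Cohen's f = 0.25, a standardized
# spread of group means closely related to the RMSSE mentioned above.
pwr.anova.test(k = 4, f = 0.25, sig.level = 0.05, power = 0.80)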