That is a relevant paper, but it seems to me to argue AGAINST n > 30 as a rule. Cochran's original rule is also in the second edition of his sampling text, on page 41. There he says, "There is no safe general rule as to how large n must be for use of the normal approximation in computing confidence limits." He then gives his rule and notes that it deals only with problems of skewness, not any other issues, and that it is for two-sided inference. He gives an example with n = 556 where the two tail probabilities are unequal but sum to about 0.05. For these data he applies his rule and concludes that a sample size of n = 90 probably would have sufficed.
Cochran's rule and Sugden's work are mentioned in the second edition of Lohr's sampling text (p. 44). She gives an example where the revised rule gives a minimum sample size of 193. She also says, "The 'magic number' of n=30 often cited in introductory statistics texts as a sample size that is 'sufficiently large' for the central limit theorem to apply, often does not suffice in finite population sampling problems."
Whichever version of Cochran's rule you use, the minimum sample size is a function of population shape, not a constant. The samples would have to be even larger if you wanted to do one-sided tests, as is common in AP Statistics.
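For anyone who wants to play with the numbers, here is a quick Python sketch of the two rules as I understand them (the function names are mine, and you should check the sources before relying on the exact forms): Cochran's rule of thumb is n > 25*G1^2, and the Sugden et al. revision is n >= 28 + 25*G1^2, where G1 is the population skewness coefficient.

def cochran_min_n(g1):
    # Cochran's rule of thumb: n > 25 * G1^2, where G1 is the
    # Fisher skewness coefficient of the population.
    return 25 * g1 ** 2

def sugden_min_n(g1):
    # Sugden et al.'s revised rule: n >= 28 + 25 * G1^2.
    return 28 + 25 * g1 ** 2

for g1 in (0.0, 0.5, 1.0, 2.0, 2.57):
    print(f"G1 = {g1:4.2f}:  Cochran n > {cochran_min_n(g1):5.0f},  "
          f"revised n >= {sugden_min_n(g1):5.0f}")

Two things stand out: a skewness of about 2.57 reproduces the minimum of 193 in Lohr's example, and even at zero skewness the revised rule still asks for about 28, while any appreciable skewness pushes the minimum well past 30.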
Forwarded message:

> From: "Chris Olsen" <COlsen@mchsi.com>
>
> Bob and All --
>
> > There are some mathematical and historical reasons to believe that
> > when n > 30, this is a safe point at which to believe that a sample
> > mean is becoming approximately normal.
>
> Found the article I was thinking of. It wasn't written by Cochran, but
> refers to Cochran's writing...
>
> The article is: Sugden, R. A., et al. (2000). Cochran's Rule for
> Simple Random Sampling. Journal of the Royal Statistical Society,
> Series B (Statistical Methodology), 62(4), 787-793.
>
> They reference Cochran, Sampling Techniques, p. 42. (I only have my
> paperback copy of ST at home, but the page number checks out there.)
>
> -- Chris