


Adjustment of alpha to sample size (Prof Rubin, Prof Koopman et al., could you help this amateur again?)
Posted: Feb 28, 2013 7:04 AM


Some time ago I asked a question here about adjustment of alpha to sample size.
I'm using it (broadly speaking, to roughly clarify) in the field of SPC/SQC ("eclectic", "pragmatic", i.e., anything goes that allegedly works :) My sort-of control chart involves a prediction interval from regression (through the origin, if anyone remembers or wants to search for the original post), where I adjust the confidence level (i.e., 1 - alpha) so that there is always half a point expected outside the limits (maybe bizarre or silly, maybe not; let's leave that aside). So my formula is
(1 - alpha) = (n - 0.5)/n
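(In case it helps, here is a minimal sketch of the chart computation in Python; numpy and scipy are assumed, and prediction_limits is just an illustrative name I made up for this post, not anything from my actual paper:)

import numpy as np
from scipy import stats

def prediction_limits(x, y, x_new):
    # Least-squares regression through the origin, with the confidence
    # level adjusted so that (1 - alpha) = (n - 0.5)/n, i.e. half a point
    # is expected outside the prediction limits on average.
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    alpha = 0.5 / n
    b = np.sum(x * y) / np.sum(x ** 2)        # slope; no intercept term
    s2 = np.sum((y - b * x) ** 2) / (n - 1)   # residual variance, n - 1 df
    se = np.sqrt(s2 * (1.0 + x_new ** 2 / np.sum(x ** 2)))
    t = stats.t.ppf(1.0 - alpha / 2.0, n - 1)
    return b * x_new - t * se, b * x_new + t * se

# e.g. prediction_limits([1, 2, 3, 4], [1.1, 2.1, 2.9, 4.2], 5.0)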
At first, I had called this approach (half seriously) idiot's FDR, but further reading convinced me that, albeit not entirely unrelated (as a concept), FDR is always and only about multiple testing (to put it simply). Fortunately, I've found three Bayesian references that (at least my blockheadedness guesses so) argue for this type of thinking (listed below; the closest to something I can at least partly understand is #3). Should be enough, especially because Bayesian is even more "in" than FDR (although I'm even more clueless about it, but the referees don't know that, and I'm skilled at conning scientific journal readers :o)
However, in the newsgroup, Prof Herman Rubin had written that "The level should decrease with increasing sample size. In low dimensional problems, with the cost of incorrect acceptance going as the kth power of the error, the rate at which the level decreases should be about 1/n^((d+k)/2)."
So it would be wonderful if one of you wizards came up with some "simple algebra" (as it's so often said in the literature when the opposite is true for mere mortals) that relates Prof Rubin's formula to mine!
I can guess that "the level" refers to alpha and that my problem is low dimensional (1D). "Incorrect acceptance" most probably means "incorrect acceptance of the null hypothesis". And let's say I take k to be 2 (by analogy with the notorious Taguchi, or because quadratic loss sounds familiar to many people in many fields). So far so good; but what does d stand for??
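(To make the comparison concrete, here is a toy numerical sketch in Python. I'm reading the exponent in the quoted rate as (d + k)/2, since the parentheses got mangled somewhere along the way, and I treat d as a free parameter and the proportionality constant as arbitrary:)

def alpha_mine(n):
    return 0.5 / n                      # from (1 - alpha) = (n - 0.5)/n

def alpha_rubin(n, d, k=2, c=0.5):
    # Prof Rubin's rate, read as alpha ~ c / n^((d + k)/2); c is unknown,
    # so only the order in n is meaningful.
    return c * n ** (-(d + k) / 2.0)

for n in (10, 100, 1000):
    print(n, alpha_mine(n), alpha_rubin(n, d=0))

# With k = 2 (quadratic loss) and d = 0, the exponent (d + k)/2 equals 1,
# so both rules shrink alpha at the same 1/n rate.

(So, if d could legitimately be 0 in my 1D problem, the two rules would agree up to a constant; but whether it can is exactly what I don't know.)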
Fortunately (or un-, from a broader perspective) my paper (currently under review) should get accepted even without such elegant justification. And all I can do to return the favour is an acknowledgement. (Needless to say, coauthorship is not something that the two Profs I mention in the title, and other newsgroup "heavyweights", especially the retired ones, want or need, anyway).
So, as usual, thanks in advance for any help.
Gaj Vidmar
References:
1. Seidenfeld T, Schervish MJ, Kadane JB. Decisions without ordering. In "Acting and Reflecting", Sieg W (ed). Kluwer: Dordrecht, 1990; 143-170.
2. Berry S, Viele K. Adjusting the alpha-level for sample size. Carnegie Mellon University Department of Statistics Technical Report 635, 1995. http://www.stat.cmu.edu/tr/tr635/tr635.ps
3. Berry S, Viele K. A note on hypothesis testing with random sample sizes and its relationship to Bayes factors. Journal of Data Science 2008; 6(1): 75-87. http://www.jds-online.com/file_download/159/JDS380.pdf



