Gaj Vidmar

Posts: 21
Registered: 12/13/04
Adjustment of alpha to sample size (Prof Rubin, Prof Koopman et al., could you help this amateur again?)
Posted: Feb 28, 2013 7:04 AM

Some time ago I asked a question here about adjustment of alpha to sample size.

I'm using it (broadly speaking, to roughly clarify) in the field of SPC/SQC
("eclectic", "pragmatic", i.e., anything goes that allegedly works :).
My sort-of-control-chart involves a prediction interval from regression
through the origin (if anyone remembers the original post or wants to search for it),
where I adjust the confidence level (i.e., 1-alpha) so that there is always
half a point to be expected outside the limits
(maybe bizarre or silly, maybe not - let's leave that aside). So my formula is

(1 - alpha) = (n - 0.5)/n, i.e., alpha = 0.5/n
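
To make this concrete, here is a minimal sketch (Python, toy data; not my
actual chart code) of the idea: with alpha = 0.5/n, the expected number of
the n points falling outside the limits is n * alpha = 0.5, i.e., half a
point. The limits here are just the textbook prediction interval for
least-squares regression through the origin.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 20
x = rng.uniform(1, 10, n)
y = 2.0 * x + rng.normal(0, 1, n)      # toy data: y = b*x + noise

alpha = 0.5 / n                        # so that (1 - alpha) = (n - 0.5)/n
b = np.sum(x * y) / np.sum(x ** 2)     # LS slope for regression through the origin
resid = y - b * x
s2 = np.sum(resid ** 2) / (n - 1)      # residual variance, n - 1 df (one parameter)
t = stats.t.ppf(1 - alpha / 2, df=n - 1)

# two-sided prediction limits at the observed x's (the "control chart" limits)
half_width = t * np.sqrt(s2 * (1 + x ** 2 / np.sum(x ** 2)))
outside = np.abs(resid) > half_width
print(f"alpha = {alpha:.4f}; expected outside = {n * alpha}; observed = {outside.sum()}")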

At first, I had called this approach (half seriously) "idiot's FDR", but
further reading convinced me that, albeit not entirely unrelated as a concept,
FDR is always and only about multiple testing (to put it simply).
Fortunately, I've found three Bayesian references that
(at least my blockheadedness guesses so)
argue for this type of thinking (listed below; the closest to something I
can at least partly understand is #3). That should be enough, especially
because Bayesian is more "in" than FDR (although I'm even more clueless
about it, but they don't know that, and I'm skilled at conning scientific
journal readers :o)

However, in the newsgroup, Prof Herman Rubin had written that "The level
should decrease with increasing sample size. In low dimensional problems,
with the cost of incorrect acceptance going as the k-th power of the error,
the rate at which the level decreases should be about 1/n^((d+k)/2)."

So it would be wonderful if one of you wizards came up with some "simple
algebra" (as it's so often said in the literature when the opposite is true
for mere mortals) that relates Prof Rubin's formula to mine!

I can guess that "the level" refers to alpha and that my problem is
low dimensional (1D). "Incorrect acceptance" most probably means
"incorrect acceptance of the null hypothesis".
And let's say I take k to be 2 (by analogy with the notorious Taguchi,
or because quadratic loss sounds familiar to many people in many fields).
So far so good; but what does d stand for??
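
For what it's worth, here is a small numeric sketch (Python) of how fast the
two rules shrink alpha, assuming (and this is only my reading, since the
quoted exponent may be garbled) that the rate is 1/n^((d+k)/2), with d the
dimension of the problem (so d = 1 for me) and k = 2 as above:

# my rule: alpha = 0.5/n, which decays like 1/n
# Rubin-like rate (my reading): alpha ~ C / n^((d + k)/2); with d = 1 and
# k = 2 that is 1/n^1.5, a faster decay; the constant C is arbitrary, 1 here
d, k = 1, 2
for n in (10, 100, 1000):
    mine = 0.5 / n
    rubin_like = 1.0 / n ** ((d + k) / 2)
    print(f"n={n:5d}  mine={mine:.1e}  Rubin-like={rubin_like:.1e}")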

Fortunately (or un-, from a broader perspective) my paper (currently under
review) should get accepted even without such elegant justification. And all
I can do to return the favour is an acknowledgement. (Needless to say,
co-authorship is not something that the two Profs I mention in the title --
and other "heavyweights", especially the retired ones -- want or need, anyway.)

So, as usual, thanks in advance for any help.

Gaj Vidmar

1. Seidenfeld T, Schervish MJ, Kadane JB. Decisions without ordering.
In: Sieg W (ed). Acting and Reflecting. Dordrecht: Kluwer, 1990; 143-170.
2. Berry S, Viele K. Adjusting the alpha-level for sample size.
Carnegie Mellon University Department of Statistics Technical Report, 1995.
3. Berry S, Viele K. A note on hypothesis testing with random sample sizes
and its relationship to Bayes factors. Journal of Data Science 2008; 6(1): 75-87.
