Date: Dec 10, 2012 10:23 AM
Author: Halitsky
Subject: 1) Just u*e and u^2 (!!); 2) IOTs vs “proper” tests

I.

You wrote:

“Hence there is no point in testing the coefficients of e and u in the
regression of c on (e,u,e*u,u^2).”

Thanks! Not only will that cut the workload in half, it will also let
me put the C,N and S,N results for u*e and u^2 | fold,set on one page,
in what printers used to call a “4-up” in the old days. (See, for
example, the 4-up for a1_1 I’ve sent offline.)

In turn, such 4-ups will not only mean fewer PDFs for you to look at,
but may also reveal possible relations between u*e and u^2 that would
otherwise not even be apparent. (I have many questions about such
relationships between u*e and u^2, but will hold off until all the
4-ups are done.)
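
For concreteness, here is roughly what each per-cell fit would look
like if I scripted it in Perl. This is only a minimal sketch, assuming
Statistics::Regression’s new/include/theta/standarderrors interface
and using made-up data; per your note, only the u*e and u^2
coefficients get carried forward for testing:

    # Minimal sketch only: placeholder data, and the new/include/theta/
    # standarderrors calls are assumed from Statistics::Regression's docs.
    use strict;
    use warnings;
    use Statistics::Regression;

    my @c = ( 1.2, 0.9, 1.7, 1.1, 2.3, 1.8, 0.7, 1.5 );   # response (placeholder)
    my @e = ( 0.3, 0.5, 0.8, 0.4, 0.9, 0.7, 0.2, 0.6 );   # predictor e (placeholder)
    my @u = ( 1.0, 2.1, 0.6, 1.8, 2.4, 0.9, 1.5, 0.4 );   # predictor u (placeholder)

    my $reg = Statistics::Regression->new(
        "c on (e, u, e*u, u^2)",
        [ "const", "e", "u", "e*u", "u^2" ],
    );
    for my $i ( 0 .. $#c ) {
        $reg->include( $c[$i],
            [ 1.0, $e[$i], $u[$i], $e[$i] * $u[$i], $u[$i] ** 2 ] );
    }

    my @theta = $reg->theta();              # coefficient estimates
    my @se    = $reg->standarderrors();     # their standard errors
    # only u*e (index 3) and u^2 (index 4) are of interest now
    printf "u*e: %8.4f  (se %.4f)\n", $theta[3], $se[3];
    printf "u^2: %8.4f  (se %.4f)\n", $theta[4], $se[4];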

II.

You wrote:

“When the IOT test is not clear, there are many ways to do a proper
test of the hypothesis that the p-values come from a Uniform[0,1]
distribution ...”

I’m going to wait until all 18 4-ups are completed for fold x method,
and if some really interesting but IOT-undecidable cases arise within
the 18, I will do the S-W’s using the Perl implementation described
here:

http://search.cpan.org/~mwendl/Statistics-Normality-0.01/lib/Statistics/Normality.pm
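
If it comes to that, the mechanics look simple enough. A minimal
sketch, assuming the module’s documented shapiro_wilk_test call (list
reference in, p-value out) and using placeholder numbers:

    # Minimal sketch: placeholder p-values, and the shapiro_wilk_test
    # call (list ref in, p-value out) is assumed from the module's POD.
    use strict;
    use warnings;
    use Statistics::Normality ':all';

    my @pvals = ( 0.03, 0.12, 0.27, 0.44, 0.58, 0.71, 0.86, 0.93 );

    my $sw_p = shapiro_wilk_test( \@pvals );
    printf "Shapiro-Wilk p-value: %.4f\n", $sw_p;
    print  "departure from normality at the 0.05 level\n" if $sw_p < 0.05;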

That way (heh-heh-heh), I won’t even have to UNDERSTAND the S-W as
described here:

http://www.itl.nist.gov/div898/handbook/prc/section2/prc213.htm

(Although seriously, I am interested in learning exactly how the “a”
constants in the S-W numerator are “generated from [...] means,
variances, and covariances [...]”.)
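
For reference, and as far as I can piece together from the NIST page
and the original Shapiro & Wilk (1965) paper, the statistic itself is

    W = ( \sum_{i=1}^{n} a_i x_{(i)} )^2 / \sum_{i=1}^{n} ( x_i - \bar{x} )^2

with coefficients

    ( a_1, ..., a_n ) = m' V^{-1} / ( m' V^{-1} V^{-1} m )^{1/2}

where x_(1) <= ... <= x_(n) are the ordered sample values, m is the
vector of expected values of the order statistics of n i.i.d. standard
normal variates, and V is their covariance matrix. I take it that is
what the handbook means by “generated from [...] means, variances, and
covariances.”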

III.

Here’s a dumber-than-usual question about S-W, if you have a moment.

I used the Stata version of S-W back in 2005 to test the original
dicodon over-representation data for normality BEFORE t-testing them.
(I didn’t t-test anything that wasn’t normal.)

And what I understood S-W to be doing was checking how well the data
conformed to the familiar Gaussian (bell) curve.

But now we’re talking about S-W measuring departure from a Uniform[0,1]
distribution (i.e., the “random backdrop” in the plots you’ve taught me
how to construct).

Is testing for fit to a Gaussian curve the same thing as testing for
departure from a Uniform[0,1] distribution?

If you have time, could you clarify here? I realize it’s elementary,
but when you explain something, I tend to understand it more or less
immediately (as opposed to explanations by the “usual suspects”).