Date: Dec 10, 2012 2:34 AM
Author: Ray Koopman
Subject: Re: Am resending the last PDF sent off-line, since I've now learned how to highlight the line of interest against the random backdrop.

On Dec 9, 8:23 pm, djh <halitsk...@att.net> wrote:
> Am resending the previously sent PDF SNa1_1_for_Rubq, since I've now
> learned how to highlight the line of interest against the random
> backdrop.
>
> It looks to me like u*e MIGHT be of possible interest, as well as u^2,
> but this is of course your call to make.


1. When the IOT test is not clear, there are many ways to do a proper
test of the hypothesis that the p-values come from a Uniform[0,1]
distribution, but such "goodness-of-fit" testing is not an area that
I know much about. The standard reference is (or was) D'Agostino &
Stephens, _Goodness-of-Fit Techniques_. The most popular tests seem
to be the Anderson-Darling, Cramér-von Mises, Watson, Shapiro-Wilk,
Kuiper, and Kolmogorov-Smirnov. They differ in their sensitivity to
different types of departure from uniformity, but I've forgotten
which is better for detecting what kind of non-uniformity. Also,
I have a vague recollection that their p-values are hard to get
unless the sample size is "large" (which yours are not).
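
If it helps, here is a minimal sketch of how two of these tests
might be run in Python with scipy (the vector pvals below is a
made-up placeholder, not your data, and the Cramér-von Mises
routine assumes scipy 1.6 or later):

    import numpy as np
    from scipy import stats

    # Hypothetical p-values to be tested for uniformity.
    pvals = np.array([0.03, 0.11, 0.27, 0.42, 0.58, 0.66, 0.71, 0.89])

    # Kolmogorov-Smirnov: H0 is that pvals ~ Uniform[0,1].
    ks_stat, ks_p = stats.kstest(pvals, 'uniform')
    print('KS statistic = %.4f, p = %.4f' % (ks_stat, ks_p))

    # Cramér-von Mises against the same uniform null.
    cvm = stats.cramervonmises(pvals, 'uniform')
    print('CvM statistic = %.4f, p = %.4f' % (cvm.statistic, cvm.pvalue))

Both tests return asymptotic p-values, which is exactly the
small-sample caveat mentioned above.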

2. There is a rule that, except in special circumstances (that do not
hold in your case), if the predictor set in a regression model
contains the product of two variables (e.g., e*u) then both factors
must also be in the predictor set, and if it contains a power (e.g.,
u^2) then it must also contain all lower powers of that variable.
Hence there is no point in testing the coefficients of e and u in
the regression of c on (e,u,e*u,u^2).
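
To make the rule concrete, here is a minimal sketch in Python using
statsmodels' formula interface (the simulated data and the names
c, e, u are placeholders, not your actual measurements):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical data standing in for the real c, e, u values.
    rng = np.random.default_rng(0)
    df = pd.DataFrame({'e': rng.normal(size=50),
                       'u': rng.normal(size=50)})
    df['c'] = (1 + 2*df['e'] + 3*df['u'] + 0.5*df['e']*df['u']
               - 0.2*df['u']**2 + rng.normal(scale=0.5, size=50))

    # e and u stay in the model because e*u and u^2 are present.
    fit = smf.ols('c ~ e + u + e:u + I(u**2)', data=df).fit()
    print(fit.summary())

The point is that e and u remain in the model as required
lower-order terms, so only the coefficients of e:u and I(u**2)
are candidates for testing.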