Topic: Test constantness of normality of residuals from linear regression
Replies: 8   Last Post: Jan 10, 2013 6:09 PM

Michael Press

Posts: 2,115
Registered: 12/26/06
Re: Test constantness of normality of residuals from linear regression
Posted: Jan 10, 2013 6:00 PM

In article
<e210e360-0b75-4c59-9225-50a0133a5fca@px4g2000pbc.googlegroups.com>,
Ray Koopman <koopman@sfu.ca> wrote:

[...]

> It all depends on what you want. Look up the Gauss-Markov theorem.
> To justify the usual OLS estimates of the regression coefficients,
> the errors need only to be unbiased, uncorrelated, and homoscedastic,
> but to justify all the usual p-values and confidence regions, the
> errors must be iid normal.
>
> However, that's considering only the theoretical justification.
> In practice, what matters is not whether the assumptions are right
> or wrong, but how wrong they are -- they're never exactly right.
>
> Normality is probably the least important assumption. The most
> important things to worry about are the general form of the model
> and whether it includes all the relevant predictor variables. Then
> you ask how correlated and/or heteroscedastic the errors might be.
> Finally, you might wonder about shapes of the error distributions.
> Minor departures from normality are inconsequential. Nothing in the
> real world is exactly normal, and any test of normality will reject
> if the sample size is big enough.
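
A quick numerical illustration of that last point (just a
sketch: the contaminated-normal errors below are an arbitrary
stand-in for "not exactly normal", and I use scipy's
D'Agostino-Pearson normaltest as the normality test):

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

for n in (50, 500, 5000, 50000):
    # mostly N(0, 1) with mild contamination -- "never exactly normal"
    errors = np.where(rng.random(n) < 0.95,
                      rng.normal(0.0, 1.0, n),
                      rng.normal(0.0, 1.5, n))
    stat, p = stats.normaltest(errors)
    print(f"n = {n:6d}   normality test p-value = {p:.4f}")

With errors like these the p-value tends to sit above 0.05 at
small n and collapse toward zero as n grows, which is the
sense in which any such test rejects once the sample is big
enough.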


Assuming that the errors are normally distributed is
equivalent to assuming that the errors have mean zero
and fixed variance (using the new word I heard today:
homoscedastic), in the sense that those assumptions
least affect how close our analysis gets to discerning
the parameters of interest. Normality is a bad
assumption only if we are suppressing some knowledge
of how the errors are distributed beyond the initial
assumptions. If it somehow turns out that a different
set of assumptions about the errors is better, for
some value of better, then that is called scientific
discovery, not bad assumptions. We should get to the
point where we cannot wring any more meaning out of
the data and are left with errors normally distributed
around zero.
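
To make that process concrete, a rough sketch (the data,
model, and numbers here are invented for illustration): fit
by ordinary least squares, then look at what is left over
for a nonzero mean, changing variance, or non-normal shape.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 10.0, 200)
y = 2.0 + 0.5 * x + rng.normal(0.0, 1.0, 200)   # true errors: N(0, 1)

# OLS on the design matrix [1, x]
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

print("estimated coefficients:", beta)
print("residual mean:", resid.mean())          # ~0 by construction (intercept)
print("residual std :", resid.std(ddof=2))
print("normality p  :", stats.normaltest(resid).pvalue)

When nothing systematic is left in those residuals, we are at
the point described above: whatever meaning the data held is
in the fitted coefficients, and what remains looks like noise
around zero.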

I have not said anything more than you about the
mathematics and statistics; I have only voiced my
perspective on the process. If you see that I am in
error (normal for me), I welcome hearing about it.

--
Michael Press


