
Topic: Difference between groups for many variables
Replies: 1   Last Post: May 19, 2006 9:49 AM

Bruce Weaver

Posts: 738
Registered: 12/18/04
Re: Difference between groups for many variables
Posted: May 19, 2006 9:49 AM

Øyvind Langsrud wrote:
> Sullivan2000 wrote:
>> It seems to me that MANOVA might be appropriate
>> http://en.wikipedia.org/wiki/MANOVA

>
> Classical MANOVA performs poorly in cases with several highly correlated
> responses, and the method collapses when the number of responses exceeds
> the number of observations. A relatively new method, named 50-50
> MANOVA, was developed to handle this problem. Principal component
> analysis is an important part of this methodology.
>
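
To make the PCA-plus-MANOVA idea in the quoted paragraph concrete, here is a
minimal Python sketch (assuming numpy, pandas, scikit-learn and statsmodels are
installed). It is not the 50-50 MANOVA algorithm itself, which has its own rules
for splitting the principal components into tested and residual parts; it only
illustrates running an ordinary MANOVA on a few component scores when there are
more responses than observations per group. The toy data and the choice of four
components are invented for the example.

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from statsmodels.multivariate.manova import MANOVA

rng = np.random.default_rng(0)

# Toy data: 30 observations in 2 groups, 40 correlated responses
# (more responses than observations per group, where classical MANOVA fails).
n, p = 30, 40
group = np.repeat(["A", "B"], n // 2)
latent = rng.normal(size=(n, 3))                 # a few underlying factors
loadings = rng.normal(size=(3, p))
Y = latent @ loadings + 0.5 * rng.normal(size=(n, p))
Y[group == "B"] += 0.8 * loadings[0]             # group shift along factor 1

# Step 1: compress the responses with PCA.
n_components = 4                                 # illustrative; not the 50-50 rule
scores = PCA(n_components=n_components).fit_transform(Y)

# Step 2: run an ordinary MANOVA on the component scores.
df = pd.DataFrame(scores, columns=[f"pc{i+1}" for i in range(n_components)])
df["group"] = group
formula = " + ".join(df.columns[:-1]) + " ~ group"
print(MANOVA.from_formula(formula, data=df).mv_test())
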
> Kevin E. Thorpe wrote:

>> One of the problems with the bonferroni approach is that it can be
>> too conservative when many comparisons are involved. An alternative
>> is to control the false discovery rate. See

>
> Bonferroni is too conservative in the sense that it calculates only a
> conservative upper bound for the adjusted p-value. As mentioned by
> Gregor Gorjanc, the problem is that dependence between the response
> variables is not taken into account. By using rotation testing (a
> simulation technique) it is possible to calculate adjusted p-values in
> an exact (under multivariate normality) and non-conservative way. FDR
> is less conservative in a different sense: a less strict error rate
> criterion is controlled.
>
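
Rotation testing itself draws random rotations of the residuals under
multivariate normality; a simpler relative that makes the same point about
dependence is a permutation max-T adjustment (Westfall-Young style). The rough
Python sketch below (numpy and scipy assumed; the function name and toy data are
invented for illustration) shows why: the adjustment is driven by the resampled
maximum statistic, so highly correlated responses incur almost no multiplicity
penalty, while independent responses approach the Bonferroni penalty.

import numpy as np
from scipy import stats

def maxT_adjusted_pvalues(Y, group, n_perm=2000, seed=0):
    """Familywise-adjusted p-values for two-group t-tests on each column of Y."""
    rng = np.random.default_rng(seed)
    g = np.asarray(group)
    obs = np.abs(stats.ttest_ind(Y[g == 0], Y[g == 1], axis=0).statistic)
    max_null = np.empty(n_perm)
    for b in range(n_perm):
        perm = rng.permutation(g)                  # relabel groups at random
        t = stats.ttest_ind(Y[perm == 0], Y[perm == 1], axis=0).statistic
        max_null[b] = np.max(np.abs(t))            # max |t| over all responses
    # Adjusted p-value: how often the null max statistic reaches each observed |t|.
    return np.array([(max_null >= t0).mean() for t0 in obs])

# Toy use: 25 + 25 observations, 10 responses that are near copies of each other,
# so the adjusted p-values stay close to the raw single-response p-values.
rng = np.random.default_rng(1)
base = rng.normal(size=50)
Y = base[:, None] + 0.05 * rng.normal(size=(50, 10))
group = np.repeat([0, 1], 25)
Y[group == 1] += 0.6
print(maxT_adjusted_pvalues(Y, group))
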
> Kevin E. Thorpe wrote:

>> Of course, the whole issue of whether or not to correct for multiple
>> testing is contentious. For an artificial example, suppose you were
>> testing 20 outcomes and all of them had p-values of 0.003 (the
>> Bonferroni-corrected threshold is 0.05/20 = 0.0025). Would you believe
>> that there was no difference between the groups simply because none of
>> the p-values passed the threshold?
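
A quick numerical check of this artificial example, assuming statsmodels is
available: with 20 raw p-values all equal to 0.003, the Bonferroni-adjusted
p-values are 0.06 (nothing significant at 0.05), while the Benjamini-Hochberg
FDR-adjusted p-values are 0.003 (everything significant).

from statsmodels.stats.multitest import multipletests

pvals = [0.003] * 20
rej_bonf, p_bonf, _, _ = multipletests(pvals, alpha=0.05, method="bonferroni")
rej_bh, p_bh, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")

print(p_bonf[0], rej_bonf.any())   # 0.06  False -> no outcome passes Bonferroni
print(p_bh[0], rej_bh.any())       # 0.003 True  -> every outcome passes BH FDR
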

>
> When all responses have the same p-value, the underlying reason is that
> these responses are extremely highly correlated. This would be handled
> by the rotation testing method: the adjusted p-values would be the same
> as the unadjusted ones.
>
> The program available at http://www.matforsk.no/ola/program.htm
> performs general linear modelling. For each model term it is possible
> to calculate
> - A single 50-50 MANOVA p-value.
> - Ordinary single response p-values.
> - Adjusted p-values according to familywise error rates calculated by
> rotation testing ("improved Bonferroni").
> - Adjusted p-values according to false discovery rates calculated by
> rotation testing.
>
> Øyvind Langsrud
>



Hi Øyvind. Judging by the abstracts on your website
(www.matforsk.no/ola/ffmanova.htm), it seems that the logic of your
method is the same as that of principal components regression (PCR). So
any objections one might have to PCR would be equally applicable to
50-50 MANOVA. Is that a fair statement?
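
For readers who have not met PCR: it regresses the outcome on a handful of
leading principal components of the predictors rather than on the predictors
themselves. A minimal scikit-learn sketch (illustrative only, not the ffmanova
code; the data and the choice of five components are made up):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 5)) @ rng.normal(size=(5, 30))   # 30 correlated predictors
X += 0.1 * rng.normal(size=X.shape)
y = X[:, 0] - X[:, 1] + rng.normal(size=50)

pcr = make_pipeline(PCA(n_components=5), LinearRegression())
pcr.fit(X, y)
print(pcr.score(X, y))   # R^2 of the regression on the 5 retained components

The usual objection to PCR, that the leading components are chosen to explain
predictor variance rather than their relation to the outcome, applies to this
construction directly, which is presumably what the question above is getting at.
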

Cheers,
Bruce
--
Bruce Weaver
bweaver@lakeheadu.ca
www.angelfire.com/wv/bwhomedir


