Math Forum » Discussions » sci.math.* » sci.stat.edu
Topic: Difference between groups for many variables
Replies: 1
Last Post: May 19, 2006 9:49 AM




Re: Difference between groups for many variables
Posted: May 19, 2006 9:49 AM


Øyvind Langsrud wrote:
> Sullivan2000 wrote:
>> It seems to me that MANOVA might be appropriate
>> http://en.wikipedia.org/wiki/MANOVA
>
> Classical MANOVA performs poorly in cases with several highly correlated
> responses, and the method collapses when the number of responses exceeds
> the number of observations. A relatively new method, named 50-50 MANOVA,
> is designed to handle this problem. Principal component analysis is an
> important part of this methodology.
>
> Kevin E. Thorpe wrote:
>> One of the problems with the Bonferroni approach is that it can be
>> too conservative when many comparisons are involved. An alternative
>> is to control the false discovery rate. See
>
> Bonferroni is too conservative in the sense that a conservative upper
> bound for the p-value is calculated. As mentioned by Gregor Gorjanc,
> the problem is that dependence between the response variables is not
> treated. By using rotation testing (a simulation technique) it is
> possible to calculate adjusted p-values in an exact (under multivariate
> normality) and non-conservative way. FDR is less conservative in the
> sense that a less conservative error-rate criterion is used.
>
> Kevin E. Thorpe wrote:
>> Of course, the whole issue of whether or not to correct for multiple
>> testing is contentious. For an artificial example, suppose you were
>> testing 20 outcomes and all of them had p-values of 0.003 (the
>> Bonferroni-corrected threshold is 0.0025). Would you believe that there
>> was no difference between the groups simply because none of the
>> p-values passed the threshold?
>
> When all responses have the same p-value, the underlying reason is that
> these responses are extremely highly correlated. This would be handled
> by the rotation testing method. The adjusted p-values would be the same
> as the unadjusted ones.
>
> The program available at http://www.matforsk.no/ola/program.htm
> performs general linear modelling. For each model term it is possible
> to calculate
> - A single 50-50 MANOVA p-value.
> - Ordinary single-response p-values.
> - Adjusted p-values according to familywise error rates calculated by
>   rotation testing ("improved Bonferroni").
> - Adjusted p-values according to false discovery rates calculated by
>   rotation testing.
>
> Øyvind Langsrud
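[The contrast between Bonferroni and FDR adjustment that Kevin Thorpe's artificial example raises can be sketched numerically. This is an illustration added for clarity, not code from the thread or from the matforsk.no program; the function names are mine.]

```python
# Kevin Thorpe's artificial example: 20 outcomes, all with p = 0.003.
# Bonferroni rejects none at alpha = 0.05, while the Benjamini-Hochberg
# FDR step-up procedure rejects all of them.

def bonferroni(pvals):
    """Bonferroni-adjusted p-values: p * m, capped at 1."""
    m = len(pvals)
    return [min(1.0, p * m) for p in pvals]

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg FDR-adjusted p-values (step-up procedure)."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, enforcing monotonicity
    # of the adjusted values.
    for rank in range(m, 0, -1):
        i = order[rank - 1]
        running_min = min(running_min, pvals[i] * m / rank)
        adjusted[i] = running_min
    return adjusted

pvals = [0.003] * 20
print(bonferroni(pvals)[0])          # about 0.06: not significant at 0.05
print(benjamini_hochberg(pvals)[0])  # about 0.003: significant at 0.05
```

Note that neither adjustment uses the correlation between the 20 outcomes; that is exactly the gap that rotation testing, as described above, is meant to fill.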
Hi Øyvind. Judging by the abstracts on your website (www.matforsk.no/ola/ffmanova.htm), it seems that the logic of your method is the same as that of principal components regression (PCR). So any objections one might have to PCR would be equally applicable to 50-50 MANOVA. Is that a fair statement?
Cheers,
Bruce

--
Bruce Weaver
bweaver@lakeheadu.ca
www.angelfire.com/wv/bwhomedir
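[For readers unfamiliar with PCR, here is a minimal sketch of ordinary principal components regression, the method Bruce is comparing 50-50 MANOVA to. This is an added illustration with hypothetical variable names, not the matforsk.no program; the usual objection is visible in the code: the components are chosen from the variance of X alone, with no reference to y.]

```python
import numpy as np

def pcr_fit(X, y, n_components):
    """Regress y on the first n_components principal components of X."""
    Xc = X - X.mean(axis=0)
    # Principal directions come from the SVD of the centered predictors;
    # y plays no role in choosing them.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = Xc @ Vt[:n_components].T  # component scores
    coef_pc, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
    # Map component coefficients back to the original predictor scale.
    return Vt[:n_components].T @ coef_pc

# Illustrative data: 50 observations, 10 predictors.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=50)
beta = pcr_fit(X, y, n_components=3)
```

With all components retained, PCR reduces to ordinary least squares; the method only differs from OLS (and only gains stability with correlated responses or predictors) when the component count is truncated.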



