
Topic: covariance matrix, correlation matrix, decomposition

Michael Pronath

Re: covariance matrix, correlation matrix, decomposition
Posted: Nov 24, 1999 5:11 AM


Michael Pronath <mcp@eda.ei.tum.de> writes:

> It is often necessary to decompose a given covariance matrix C
> (resp. correlation matrix R) as C=G*G'. Cholesky decomposition is
> commonly used here.
> But others are possible. Starting from the eigenvalue
> decomposition C=Q*L*Q', there is
>
> G1 = Q*sqrt(L), as G1*G1' = Q*sqrt(L)*sqrt(L)*Q'=Q*L*Q' = C
>
> or
>
> G2 = Q*sqrt(L)*Q', as G2*G2' = Q*sqrt(L)* Q'*Q *sqrt(L)*Q'
>                              = Q*sqrt(L)* I  *sqrt(L)*Q' = C
>
> ...
>
> I'd like to know whether anybody has made a more thorough analysis
> of this, and of the pros and cons of the various methods.
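
For concreteness, here is how the three candidates look in NumPy (my
own sketch with a made-up example matrix, not code from the original
post):

    import numpy as np

    # Example covariance matrix (made up for illustration; assumed
    # symmetric positive definite).
    C = np.array([[4.0, 1.2, 0.6],
                  [1.2, 2.0, 0.5],
                  [0.6, 0.5, 1.0]])

    # Cholesky: G is lower triangular.
    G_chol = np.linalg.cholesky(C)

    # Eigenvalue decomposition C = Q*L*Q'.
    lam, Q = np.linalg.eigh(C)

    # Eigenvalue I: G1 = Q*sqrt(L) (broadcasting scales the columns of Q).
    G1 = Q * np.sqrt(lam)

    # Eigenvalue II: G2 = Q*sqrt(L)*Q', the symmetric square root of C.
    G2 = (Q * np.sqrt(lam)) @ Q.T

    # All three satisfy G*G' = C.
    for G in (G_chol, G1, G2):
        assert np.allclose(G @ G.T, C)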



Sargis Dallakyan <sargis@cerfacs.fr> writes:

> IMO if you have a covariance matrix and want to decompose it, then
> surely Cholesky is the best. But if you want to generate random
> vectors with a desired confidence ellipsoid, then 2) and 3) give
> you a direct way to do that.


Why do you think that Cholesky is the best? It is commonly used for
that task, and I would like to know the reason.
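
For what it's worth, the sampling argument works with any factor: if
G*G' = C and z is a vector of i.i.d. standard normals, then
Cov(G*z) = G*I*G' = C. A minimal sketch (again mine, with made-up
numbers):

    import numpy as np

    rng = np.random.default_rng(0)
    C = np.array([[2.0, 0.8],
                  [0.8, 1.0]])

    # Any of the three factors works here; Cholesky is just one choice.
    G = np.linalg.cholesky(C)

    z = rng.standard_normal((2, 100000))  # columns are i.i.d. N(0, I)
    x = G @ z                             # columns are now ~ N(0, C)

    print(np.cov(x))                      # approximately C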



Helmut Jarausch <jarausch@igpm.rwth-aachen.de> writes:

> What about an SVD of G (not C!)? Just computing C (without even
> decomposing it) makes the condition number worse. Say you have the
> SVD of G: G = U S V', where U and V are orthogonal matrices and S
> has nontrivial elements only on its diagonal. Then G*G' = U S S' U',
> where S S' has the diagonal (s_{ii}^2) and zeros elsewhere. You can
> extract all information about C from U and S, and it should be much
> more accurate numerically.
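
My reading of that conditioning point, as a sketch: if G comes from a
data matrix, take the SVD of the (centered) data directly instead of
forming C, because cond(C) = cond(G)^2. The data below are made up:

    import numpy as np

    rng = np.random.default_rng(1)
    # Made-up data with badly scaled columns.
    X = rng.standard_normal((1000, 3)) @ np.diag([10.0, 1.0, 1e-4])
    Xc = X - X.mean(axis=0)               # center the data

    # SVD of the data; the squared singular values give the
    # eigenvalues of C without ever forming the ill-conditioned C.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    eigvals_C = s**2 / (X.shape[0] - 1)

    # Forming C explicitly squares the condition number.
    C = Xc.T @ Xc / (X.shape[0] - 1)
    print(np.linalg.cond(Xc)**2, np.linalg.cond(C))  # roughly equal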


Indeed, if T is the diagonal matrix of the standard deviations and R
is the correlation matrix, then C=T*R*T'. The decomposition
G*G'=C=T*R*T' then gives inverse(T)*G*G'*inverse(T') = R = H*H', with
H=inverse(T)*G. So it is probably better to decompose R=H*H' (and
then compute G=T*H if necessary).
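
A sketch of that rescaling (mine; the example matrix is made up):

    import numpy as np

    C = np.array([[4.0, 1.2],
                  [1.2, 1.0]])

    std = np.sqrt(np.diag(C))             # standard deviations
    T = np.diag(std)
    R = C / np.outer(std, std)            # correlation matrix, unit diagonal

    H = np.linalg.cholesky(R)             # or either eigenvalue factor of R
    G = T @ H
    assert np.allclose(G @ G.T, C)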

Nevertheless, for the decomposition R=H*H', the same options remain:
Cholesky, Eigenvalue I, or Eigenvalue II. Which is best in which
situation, and why?

At first glance, I like Eigenvalue I best, because it generates
orthogonal grids. As C and R are symmetric, and the number of
statistical parameters is commonly < 100, the eigenvalue
decomposition should be fast and stable. Moreover, the parameter
ordering does not influence the decomposition, as it does in
Cholesky; a quick check of this is sketched below.
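
A quick check of the ordering remark (my sketch): relabeling the
parameters with a permutation matrix P carries the symmetric square
root (Eigenvalue II) along, but changes the Cholesky factor more
substantially, since that factor must stay triangular. I use
Eigenvalue II here because it is unique, whereas G1 is only defined
up to the signs and ordering returned by the eigensolver.

    import numpy as np

    C = np.array([[4.0, 1.2, 0.6],
                  [1.2, 2.0, 0.5],
                  [0.6, 0.5, 1.0]])
    P = np.eye(3)[[2, 0, 1]]              # a permutation matrix

    def sym_sqrt(A):
        # Eigenvalue II: the unique symmetric square root of A.
        lam, Q = np.linalg.eigh(A)
        return (Q * np.sqrt(lam)) @ Q.T

    # The symmetric square root commutes with the relabeling:
    assert np.allclose(sym_sqrt(P @ C @ P.T), P @ sym_sqrt(C) @ P.T)

    # The Cholesky factor does not (P*chol(C)*P' is not triangular):
    L1 = np.linalg.cholesky(P @ C @ P.T)
    L2 = P @ np.linalg.cholesky(C) @ P.T
    print(np.allclose(L1, L2))            # False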

As the decomposition R=G*G' is frequently used for generating random
numbers, grids in statistical parameter spaces, etc., I assume that
somebody has already performed such an analysis, but where?


Michael Pronath
