It is often necessary to decompose a given covariance matrix C (resp. correlation matrix R) as C = G*G'. The Cholesky decomposition is commonly used here, but there are other possibilities. Starting from the eigenvalue decomposition C = Q*L*Q', there is
G1 = Q*sqrt(L), as G1*G1' = Q*sqrt(L)*sqrt(L)*Q' = Q*L*Q' = C
G2 = Q*sqrt(L)*Q', as G2*G2' = Q*sqrt(L)* Q'*Q *sqrt(L)*Q' = Q*sqrt(L)* 1 *sqrt(L)*Q' = C
G, G1, and G2 have different properties (G is triangular, G1'*G1 is diagonal, and G2 is symmetric), so one of them may be preferable to the others in some cases. Note that all three of them can be used to generate random numbers with a given covariance matrix, and all three of them generate exactly the same distribution.
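To make the three variants concrete, here is a small NumPy sketch (the example matrix C is my own choice, any symmetric positive-definite matrix works) that computes G, G1, and G2, checks the stated properties, and uses one of them to generate correlated normal samples:

```python
import numpy as np

# An arbitrary symmetric positive-definite example matrix
C = np.array([[4.0, 2.0, 0.5],
              [2.0, 3.0, 1.0],
              [0.5, 1.0, 2.0]])

# Cholesky: G is lower triangular, C = G @ G.T
G = np.linalg.cholesky(C)

# Eigendecomposition C = Q @ diag(L) @ Q.T
L, Q = np.linalg.eigh(C)
sqrtL = np.sqrt(L)

# G1 = Q*sqrt(L): each eigenvector column scaled by sqrt of its eigenvalue
G1 = Q * sqrtL

# G2 = Q*sqrt(L)*Q': the symmetric matrix square root of C
G2 = (Q * sqrtL) @ Q.T

# All three reproduce C:
for Gi in (G, G1, G2):
    assert np.allclose(Gi @ Gi.T, C)

# The stated properties:
assert np.allclose(G, np.tril(G))            # G is lower triangular
assert np.allclose(G1.T @ G1, np.diag(L))    # G1'*G1 is diagonal (= L)
assert np.allclose(G2, G2.T)                 # G2 is symmetric

# Any of them maps iid standard normals to samples with covariance C:
rng = np.random.default_rng(0)
z = rng.standard_normal((3, 100_000))
x = G2 @ z   # sample covariance of x approaches C
```

Since z is spherically symmetric and all three factors satisfy Gi*Gi' = C, the resulting distributions are identical, as noted above.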
For example, when putting a grid into a space of normally distributed parameters: the grid is generated in a "normed" space (zero mean, unit variance), and each grid point q is then transformed into the "real" space grid point p = p0 + G*q. The shape of the transformed grid depends on the choice of G:
1) Cholesky: The grid is "sheared"; angles between grid edges vary widely between 0° and 180° and depend on the order of the parameters.
2) G1: The grid is scaled along its axes and rotated; angles between grid edges are all 90°.
3) G2: The grid looks like a rhombus.
The condition number of C may be extremely bad, e.g. if the statistical parameters are physical quantities and you have capacitances (1e-12) as well as donor concentrations (1e20). Stability of the decomposition could be an issue here and give an advantage to one of the methods.
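A quick numerical probe of this situation, using scales like the ones mentioned above (the correlation value 0.5 is my own example choice). One common workaround, sketched here, is to decompose the well-conditioned correlation matrix R instead and reapply the standard deviations afterwards, since C = diag(s)*R*diag(s):

```python
import numpy as np

# Hypothetical scales: a capacitance-like parameter (~1e-12) next to a
# donor-concentration-like parameter (~1e20)
s = np.array([1e-12, 1e20])          # standard deviations
R = np.array([[1.0, 0.5],
              [0.5, 1.0]])           # well-conditioned correlation matrix
C = np.outer(s, s) * R               # covariance: C = diag(s) @ R @ diag(s)

print(np.linalg.cond(C))             # astronomically large (~1e64)
print(np.linalg.cond(R))             # exactly 3 for this R

# Workaround: factor R, then rescale; Gs = diag(s) @ G_R still satisfies
# Gs @ Gs.T == C, but only the harmless factorization of R is done in
# floating point.
G_R = np.linalg.cholesky(R)
Gs = s[:, None] * G_R
assert np.allclose(Gs @ Gs.T, C)
```

This sidesteps the conditioning problem for any of the three factorizations, since the mixing of scales happens only in the exact diagonal scaling, not inside the decomposition itself.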
I'd like to know if anybody has done a more profound analysis of this, and of the pros and cons of the various methods.