
Topic: LU decomposition and Mahalanobis distance
Replies: 2   Last Post: Jul 3, 2006 2:44 PM

Gordon Sande
Re: LU decomposition and Mahalanobis distance
Posted: Jul 3, 2006 12:04 PM

On 2006-07-03 12:02:58 -0300, "Ross Clement (Email address invalid - do
not use)" <clemenr@wmin.ac.uk> said:

> Dear all,
>
> I am currently using a library (gnu gsl) which takes great pains in
> both API design and documentation to guide people away from using
> matrix inverses explicitly and towards using the LU decomposition. I
> have code which is going to calculate Mahalanobis distance, requiring
> the inverse of the covariance matrix. Is there any reason to use an LU
> decomposition in calculating Mahalanobis distance? At present I'm a
> little bit skeptical, as I will invert a (say 100x100) covariance
> matrix once, then calculate *many* M distances. While I don't know
> exactly how to formulate Mahalanobis distance using the LU
> decomposition, or even know if it's possible (googling on both yields
> nothing that obviously answers my question), I would presume that any
> rewrite would require additional computation and therefore time. Would
> there be a significant improvement in numerical accuracy? At present my
> default inclination is to find the actual inverse and then calculate
> distance by the standard formula.
>
> Also, I have to solve some systems of linear equations. But the gsl
> library includes library routines to do this given an LU decomposition,
> so unless corrected I'm assuming it's a no-brainer to solve them using
> an LU decomposition.
>
>
> Thanks in anticipation,
>
> Ross-c

The whole point of an LU decomposition is to make solving easier.
The inverse is itself obtained by solving for a family of RHSs (the
columns of the identity), and is then used to solve again. The answer
obtained directly from the LU factors has lower numerical error.
Solving with an LU factorization is the same expense as multiplying
by the inverse; forming the inverse is like finding the solution for
all possible RHSs and then using that for the particular RHS you have.
The LU decomposition is also less work to calculate than an inverse. So:
cheaper and more accurate. Hard to beat.
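As a concrete sketch of the point above (in Python with NumPy/SciPy rather
than GSL, purely for illustration -- GSL's gsl_linalg_LU_* routines follow
the same factor-once, solve-many pattern):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
A = rng.standard_normal((100, 100))
b = rng.standard_normal(100)

# Factor once, O(n^3); each subsequent solve is O(n^2) -- the same cost
# per RHS as multiplying by a precomputed inverse.
lu, piv = lu_factor(A)
x = lu_solve((lu, piv), b)

# The explicit-inverse route gives (up to rounding) the same answer,
# but costs more to set up and accumulates more rounding error.
x_inv = np.linalg.inv(A) @ b
assert np.allclose(x, x_inv)
```

The factorization is reused for every new right-hand side, so nothing is
gained by going the extra step to an explicit inverse.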

If you have a symmetric positive definite matrix (a covariance matrix
fits the requirements) then you should use a (Cholesky) LL^t decomposition.
LU is for when the matrix is either not symmetric or not positive definite.
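For the Mahalanobis case this works out neatly: with Sigma = L L^t, solve
L z = (x - mu) by forward substitution and then d^2 = z^t z, since
(x-mu)^t Sigma^{-1} (x-mu) = z^t z. A minimal sketch (Python/NumPy here
for illustration; the matrix and names are made up, not from the poster's
code):

```python
import numpy as np
from scipy.linalg import solve_triangular

rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
cov = B @ B.T + 5 * np.eye(5)   # any symmetric positive definite matrix
mu = np.zeros(5)

# Factor once: cov = L L^T, with L lower triangular.
L = np.linalg.cholesky(cov)

def mahalanobis_sq(x):
    # Forward-substitute L z = (x - mu); then d^2 = z.z, because
    # (x-mu)^T cov^{-1} (x-mu) = z^T z when cov = L L^T.
    z = solve_triangular(L, x - mu, lower=True)
    return z @ z

x = rng.standard_normal(5)
d2 = mahalanobis_sq(x)

# Agrees with the explicit-inverse formula, without ever forming cov^{-1}.
d2_inv = (x - mu) @ np.linalg.inv(cov) @ (x - mu)
assert np.isclose(d2, d2_inv)
```

Each distance still costs one triangular solve, O(n^2), the same order as
multiplying by a precomputed inverse, so nothing is lost by skipping the
inversion.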

All of this is pretty standard introductory numerical linear algebra,
widely available in either "numerical analysis" or "computational
statistics" books.
