Matt J
Posts: 4,992
Registered: 11/28/09


Re: Nonlinear optimization
Posted: Mar 7, 2013 3:50 PM


"Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <kh9it4$qg1$1@newscl01ah.mathworks.com>...
> "Matt J" wrote in message <kh8skk$s6c$1@newscl01ah.mathworks.com>...
> >
> > Bear in mind also that generic LM doesn't use the Jacobian of the cost function F(x), but rather the Hessian of F, or equivalently the Jacobian of F's gradient. This not only means a possibly intensive Hessian calculation, but also when you solve the update equation
>
> What you pointed out is the difference between quasi-Newton and Newton. True, both can be implemented with LM.
>
> In practice quasi-Newton is quasi sufficient.
======================
I'm not sure what link with quasi-Newton you're referring to. If you're saying that

(H + lambda*I)*dx = -gradient

is an LM generalization of Newton's method, then yes, I'm sure Newton-LM would converge in fewer iterations; however, each iteration looks costly. You would have to know the minimum eigenvalue of H (and choose lambda larger than its negative) in order to make H + lambda*I positive definite.
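The cost of that eigenvalue check can be sketched concretely. Below is a hypothetical illustration (not from the thread) of one damped-Newton step in Python/NumPy: the function name `lm_newton_step`, the toy quadratic, and the shift constant `1e-8` are all my own choices, assumed only for demonstration. It solves (H + lambda*I)*dx = -gradient, bumping lambda above -lambda_min(H) when needed, which requires the eigenvalue computation the post is warning about.

```python
import numpy as np

def lm_newton_step(H, g, lam):
    """One damped-Newton (LM-style) step: solve (H + lam*I) dx = -g.

    If H + lam*I is not positive definite, lam is shifted just past
    -min_eig(H) -- this eigenvalue computation is the per-iteration
    cost mentioned in the post.
    """
    n = H.shape[0]
    min_eig = np.linalg.eigvalsh(H).min()   # costly for large H
    if lam <= -min_eig:
        lam = -min_eig + 1e-8               # force positive definiteness
    return np.linalg.solve(H + lam * np.eye(n), -g)

# Toy quadratic cost F(x) = x1^2 - 2*x1*x2 + 2*x2^2, minimized at the origin.
# For a quadratic, H is the exact (constant) Hessian and g = H @ x.
H = np.array([[2.0, -2.0],
              [-2.0, 4.0]])
x0 = np.array([1.0, 1.0])
g = H @ x0                                  # gradient at x0
dx = lm_newton_step(H, g, lam=0.0)
x1 = x0 + dx                                # lands on the minimizer for lam=0
```

With lambda = 0 and an exact Hessian this reduces to a pure Newton step, which reaches the minimizer of a quadratic in one iteration; a positive lambda trades that speed for robustness, exactly the QuasiNewton-vs-Newton trade-off being debated above.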

