Matt J


Re: Nonlinear optimization
Posted: Mar 7, 2013 4:36 PM


"Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <khavr6$imm$1@newscl01ah.mathworks.com>...
> "Matt J" wrote in message <khaui3$efe$1@newscl01ah.mathworks.com>...
> >
> > I'm not sure what link with quasi-Newton you're referring to. If you're saying that
> >
> >     (H + lambda*I) x = gradient
> >
> > is an LM generalization of Newton's method,
>
> Quasi-Newton: replace the Hessian by an approximation of it, usually built from first derivatives, such as the BFGS formula, or H ~ J'*J for a least-squares cost function, where J is the Jacobian of the model.
>
> > then yes, I'm sure Newton-LM would converge faster; however, each iteration looks costly. You would have to know the minimum eigenvalue of H in order to make (H + lambda*I) positive definite.
>
> Both the BFGS and J'*J approximations provide a convex quadratic approximation. Therefore there is no need to bother with such details about positive definiteness.
That much I understand. Maybe I didn't understand what you meant by quasi-Newton being "quasi-efficient". It does look like finding lambda for quasi-Newton-LM would be much more efficient than for true Newton-LM.
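To make Bruno's point concrete, here is a minimal sketch (in Python, not from the thread; the model and parameter names are illustrative) of one damped step using the Gauss-Newton approximation H ~ J'*J. Since J'*J is always positive semidefinite, any lambda > 0 makes J'*J + lambda*I positive definite, so no minimum-eigenvalue computation is needed before solving the step equation:

```python
import math

# Illustrative least-squares model: y = a * exp(b * x), parameters p = [a, b].

def residuals(p, xs, ys):
    a, b = p
    return [a * math.exp(b * x) - y for x, y in zip(xs, ys)]

def jacobian(p, xs):
    a, b = p
    # Row i: [d r_i / d a, d r_i / d b]
    return [[math.exp(b * x), a * x * math.exp(b * x)] for x in xs]

def lm_step(p, xs, ys, lam):
    """One Levenberg-Marquardt step with H approximated by J'*J.

    Solves (J'J + lam*I) dp = -J'r. Because J'J is positive
    semidefinite, lam > 0 guarantees the system is positive
    definite -- no eigenvalue check required.
    """
    r = residuals(p, xs, ys)
    J = jacobian(p, xs)
    n = len(J)
    JtJ = [[sum(J[i][u] * J[i][v] for i in range(n)) for v in range(2)]
           for u in range(2)]
    Jtr = [sum(J[i][u] * r[i] for i in range(n)) for u in range(2)]
    A = [[JtJ[0][0] + lam, JtJ[0][1]],
         [JtJ[1][0], JtJ[1][1] + lam]]
    # 2x2 solve by Cramer's rule: dp = A^{-1} (-J'r).
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    da = (-Jtr[0] * A[1][1] + Jtr[1] * A[0][1]) / det
    db = (-Jtr[1] * A[0][0] + Jtr[0] * A[1][0]) / det
    return [p[0] + da, p[1] + db]
```

With true Newton-LM, H can be indefinite, so one would first have to shift by at least the magnitude of its most negative eigenvalue; with the quasi-Newton (J'*J or BFGS) model that whole concern disappears, which is presumably why finding lambda is so much cheaper there.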

