"Bruno Luong" <email@example.com> wrote in message <firstname.lastname@example.org>... > "Matt J" wrote in message <email@example.com>... > > > > > I'm not sure what link with quasi-Newton that you're referring to. If you're saying that > > > > (H+lambda*I) x=gradient > > > > is an LM generalization of Newton's method, > > Quasi-Newton -> Replace Hessian by an appoximation of it, usually based from the first derivative, such as BFGS formula or H ~ J'*J in the least square cost function, where J is the Jacobian of the model. > > >then yes, I'm sure Newton-LM would converge faster, however each iteration looks costly. You would have to know the minimum eigenvalue of H in order to make (H+lambda*I) positive definite. > > Both BFGS and J'*J approximation provide quasi convex quadratic approximation. Therefore there is no need to bother with such detail about positiveness. ========
That much I understand. Maybe I misunderstood what you meant by quasi-Newton being "quasi-efficient". Since the approximate Hessian is already positive semidefinite, any lambda > 0 makes (H+lambda*I) positive definite, so finding lambda for quasi-Newton-LM looks much cheaper than for true Newton-LM, where you would first need the minimum eigenvalue of H.
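
For concreteness, here is a minimal sketch of one damped step with the J'*J approximation (the names resfun, x, and lambda are mine, not from anything above). It just illustrates why any positive lambda suffices in the quasi-Newton case:

  % One quasi-Newton-LM step using the J'*J (Gauss-Newton) Hessian model.
  % resfun is a hypothetical function returning the residual vector r(x)
  % and its Jacobian J at the current point x.
  [r, J] = resfun(x);
  g = J' * r;                     % gradient of the cost 0.5*||r||^2
  H = J' * J;                     % positive semidefinite Hessian model
  % Because H = J'*J is positive semidefinite, H + lambda*eye(n) is
  % positive definite for ANY lambda > 0 -- no eigenvalue computation
  % needed, unlike true Newton where H may be indefinite.
  n  = numel(x);
  dx = -(H + lambda*eye(n)) \ g;  % backslash solves the SPD damped system
  x  = x + dx;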