
Re: Nonlinear optimization
Posted: Mar 7, 2013 4:12 PM


"Matt J" wrote in message <khaui3$efe$1@newscl01ah.mathworks.com>...
> > I'm not sure what link with quasi-Newton you're referring to. If you're
> > saying that
> >
> >    (H + lambda*I) x = gradient
> >
> > is an LM generalization of Newton's method,
Quasi-Newton: replace the Hessian by an approximation of it, usually built from first derivatives, such as the BFGS formula, or H ~ J'*J for a least-squares cost function, where J is the Jacobian of the model.
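As a concrete illustration of the H ~ J'*J approximation (a minimal sketch; the residual function and its data are hypothetical, chosen only to make the example self-contained), consider a least-squares cost f(x) = 0.5*||r(x)||^2:

```python
import numpy as np

def residual(x):
    # Hypothetical residuals r(x): fitting y = a*exp(b*t) to two data points.
    a, b = x
    t = np.array([0.0, 1.0])
    y = np.array([1.0, 2.0])
    return a * np.exp(b * t) - y

def jacobian(x):
    # Analytic Jacobian dr/dx of the residuals above.
    a, b = x
    t = np.array([0.0, 1.0])
    e = np.exp(b * t)
    return np.column_stack([e, a * t * e])

x = np.array([1.0, 0.5])
J = jacobian(x)
H_gn = J.T @ J          # Gauss-Newton approximation of the Hessian
g = J.T @ residual(x)   # exact gradient of 0.5*||r(x)||^2
```

H_gn is symmetric positive semi-definite by construction, whatever the model, which is the property exploited below.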
> then yes, I'm sure Newton-LM would converge faster; however, each iteration
> looks costly. You would have to know the minimum eigenvalue of H in order
> to make (H + lambda*I) positive definite.
Both the BFGS update and the J'*J approximation yield a positive (semi-)definite matrix, i.e., a convex quadratic model, so (H + lambda*I) is positive definite for any lambda > 0. There is no need to bother with the minimum eigenvalue.
Those notions are well known in the optimization literature.
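This can be checked numerically: when H = J'*J (positive semi-definite by construction, even for a rank-deficient J), H + lambda*I is positive definite for any lambda > 0, so a Cholesky factorization of the LM system always succeeds with no eigenvalue estimate. A minimal sketch with an arbitrary random J (illustrative values only):

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((3, 5))   # 3 residuals, 5 parameters: J'*J is 5x5, rank <= 3
H = J.T @ J                       # positive semi-definite, here singular
lam = 1e-3

# Eigenvalues of H + lam*I are those of H shifted up by lam, hence all >= lam > 0.
A = H + lam * np.eye(5)
L = np.linalg.cholesky(A)         # succeeds without any eigenvalue information
g = rng.standard_normal(5)
step = np.linalg.solve(A, g)      # the LM step (H + lambda*I) x = g
```

Had H been an indefinite exact Hessian instead, one would indeed need a bound on its minimum eigenvalue to pick lambda, which is the cost Matt J refers to; the quasi-Newton approximations sidestep it.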
Bruno

