Matt J
Posts:
4,994
Registered:
11/28/09


Re: Nonlinear optimization
Posted:
Mar 6, 2013 9:05 PM


"Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <kh8jcc$4qj$1@newscl01ah.mathworks.com>... > > As for LevenberMarquardt, it should still converge under this scheme, though perhaps slowly. Working with F directly would without question be better, but then you have to code it yourself. I have searched for general LevenbergMarquardt routines on the FEX and didn't find them. :( > > Not a generic LM, but rather than reinventing the wheel, one might save some work by changing the specific cost function and the Jacobian calculation of this code > > http://www.mathworks.com/matlabcentral/newsreader/view_thread/281583 > > Admittedly it takes some minimum skill to make the Jacobian calculation correct and fast. ===================
Bear in mind also that generic LM doesn't use the Jacobian of the cost function F(x), but rather the Hessian of F, or equivalently the Jacobian of F's gradient. This not only means a possibly expensive Hessian calculation; it also means that when you solve the update equation
(Hessian(x) + eye(N)*lambda)*step = -gradient(x),   xnew = x + step
you have to choose lambda so that Hessian(x) + eye(N)*lambda is positive definite. This can be difficult for nonconvex F.
These considerations make me wonder whether applying LM to F(x) directly really is preferable.

