"Bruno Luong" <email@example.com> wrote in message <firstname.lastname@example.org>...
> > As for Levenberg-Marquardt, it should still converge under this scheme, though perhaps slowly. Working with F directly would without question be better, but then you have to code it yourself. I have searched for general Levenberg-Marquardt routines on the FEX and didn't find them. :-(
>
> Not a generic LM, but rather than reinventing the wheel, one might save some work by changing the specific cost function and the Jacobian calculation of this code
>
> http://www.mathworks.com/matlabcentral/newsreader/view_thread/281583
>
> Admittedly it takes some minimum skill to make the Jacobian calculation correct and fast.

===================
Bear in mind also that generic LM applied to a cost function F(x) doesn't use the Jacobian of F, but rather the Hessian of F (equivalently, the Jacobian of F's gradient). This means not only a potentially expensive Hessian calculation, but also that when you solve the update equation
(Hessian(x) + lambda*eye(N))*dx = -gradient(x),   xnew = x + dx
you have to choose lambda so that Hessian(x) + lambda*eye(N) is positive definite, which can be difficult for non-convex F.
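To make the lambda-selection issue concrete, here is a minimal MATLAB sketch of one damped-Newton (LM-style) step on a general F. The function handles gradF and hessF, and the tenfold lambda increase, are illustrative assumptions, not part of any particular routine:

```matlab
function [xnew, lambda] = dampedNewtonStep(x, gradF, hessF, lambda)
% One LM-style damped Newton step on a general cost F(x).
% gradF(x) returns the gradient (N-by-1), hessF(x) the N-by-N Hessian.
N = numel(x);
g = gradF(x);
H = hessF(x);
% Increase lambda until H + lambda*eye(N) is positive definite;
% the two-output form of chol returns p == 0 exactly in that case.
[~, p] = chol(H + lambda*eye(N));
while p ~= 0
    lambda = 10*lambda;                 % simple tenfold heuristic
    [~, p] = chol(H + lambda*eye(N));
end
dx = -(H + lambda*eye(N)) \ g;          % solve the damped system
xnew = x + dx;
end
```

A full LM loop would additionally shrink lambda after a successful step and reject steps that increase F, but even this fragment shows the extra machinery needed compared with the Gauss-Newton form, where J'*J + lambda*eye(N) is positive definite for any lambda > 0.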
These considerations make me wonder whether applying LM to F(x) directly really is preferable.