
Topic: Non-linear optimization
Replies: 32   Last Post: Mar 8, 2013 2:22 AM


Matt J

Posts: 4,994
Registered: 11/28/09
Re: Non-linear optimization
Posted: Mar 7, 2013 3:50 PM

"Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <kh9it4$qg1$1@newscl01ah.mathworks.com>...
> "Matt J" wrote in message <kh8skk$s6c$1@newscl01ah.mathworks.com>...
>

> > Bear in mind also that generic LM doesn't use the Jacobian of the cost function F(x), but rather the Hessian of F, or equivalently the Jacobian of F's gradient. This not only means a possibly intensive Hessian calculation, but also that when you solve the update equation
>
> What you pointed out is the difference between quasi-Newton and Newton. True, both can be implemented with LM.
>
> In practice, quasi-Newton is quasi sufficient.

======================

I'm not sure what link with quasi-Newton you're referring to. If you're saying that

(H + lambda*I)*dx = -gradient

is an LM generalization of Newton's method, then yes, I'm sure Newton-LM would converge faster; however, each iteration looks costly. You would have to know the minimum eigenvalue of H in order to make (H + lambda*I) positive definite, i.e., lambda must exceed the magnitude of H's most negative eigenvalue.
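
For concreteness, here is a minimal MATLAB sketch of that Newton-LM step. The function name newton_lm_step, the lambda0 floor, and the dense eig call are illustrative choices, not code from any toolbox:

% One damped-Newton (Newton-LM) step, assuming the gradient g and
% symmetric Hessian H of the cost F are available at the current x.
function x = newton_lm_step(x, g, H, lambda0)
    % The costly part discussed above: the smallest eigenvalue of H.
    % For a large sparse H, eigs(H, 1, 'sa') would be cheaper than eig.
    emin = min(eig(H));
    % Shift so that H + lambda*I is positive definite (lambda > -emin).
    lambda = max(lambda0, -emin + sqrt(eps));
    % Damped Newton update: solve (H + lambda*I)*dx = -g.
    dx = -(H + lambda*eye(numel(x))) \ g;
    x  = x + dx;
end

The eig call is exactly the per-iteration cost in question: O(n^3) for an n-by-n dense Hessian, on top of forming H itself. Least-squares LM avoids this by replacing H with J'*J, which is positive semi-definite by construction, so any lambda > 0 already makes the system positive definite.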





