
Views expressed in these public forums are not endorsed by NCTM or The Math Forum.

Topic: Non-linear optimization
Replies: 32   Last Post: Mar 8, 2013 2:22 AM


Bruno Luong
Re: Non-linear optimization
Posted: Mar 7, 2013 4:12 PM

"Matt J" wrote in message <khaui3\$efe\$1@newscl01ah.mathworks.com>...

>
> I'm not sure what link with quasi-Newton that you're referring to. If you're saying that
>
> (H+lambda*I) x=gradient
>
> is an LM generalization of Newton's method,

Quasi-Newton -> Replace the Hessian by an approximation of it, usually built from first derivatives only, such as the BFGS update formula, or H ~ J'*J for a least-squares cost function, where J is the Jacobian of the model.
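To illustrate the J'*J idea, here is a minimal NumPy sketch (the exponential model and data are made up for illustration): for a least-squares cost f(x) = 0.5*||r(x)||^2, the exact gradient is J'*r and the Gauss-Newton approximation H ~ J'*J uses only first derivatives of the model.

```python
import numpy as np

# Toy least-squares problem: fit y ~ x[0] * exp(x[1] * t)
# (model and data are illustrative, not from the thread)
t = np.array([0.0, 0.5, 1.0, 1.5])
y = np.array([2.0, 2.6, 3.4, 4.5])

def residual(x):
    return x[0] * np.exp(x[1] * t) - y

def jacobian(x):
    # Analytic Jacobian: dr_i/dx0 = exp(x1*t_i), dr_i/dx1 = x0*t_i*exp(x1*t_i)
    e = np.exp(x[1] * t)
    return np.column_stack([e, x[0] * t * e])

x = np.array([2.0, 0.5])
J = jacobian(x)
r = residual(x)

grad = J.T @ r       # exact gradient of f(x) = 0.5*||r||^2
H_approx = J.T @ J   # Gauss-Newton Hessian approximation

# J'*J drops the second-derivative term sum_i r_i * Hess(r_i), so it
# needs no second derivatives and is positive semidefinite by construction.
eigs = np.linalg.eigvalsh(H_approx)
```

Note that H_approx is symmetric positive semidefinite for any J, which is what makes the damping discussion below straightforward.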

>then yes, I'm sure Newton-LM would converge faster, however each iteration looks costly. You would have to know the minimum eigenvalue of H in order to make (H+lambda*I) positive definite.

Both the BFGS and J'*J approximations yield convex quadratic models: BFGS updates preserve positive definiteness, and J'*J is positive semidefinite by construction. Therefore there is no need to bother with the minimum eigenvalue to make (H+lambda*I) positive definite.
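A small NumPy sketch of that point (the Jacobian here is contrived to be rank-deficient): even when J'*J is singular, J'*J + lambda*I is positive definite for any lambda > 0, so the damped system can be solved directly (e.g. by Cholesky) without ever computing an eigenvalue.

```python
import numpy as np

# Contrived rank-deficient Jacobian: second column = 2 * first column,
# so H = J'*J is only positive *semi*definite (singular).
J = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])
H = J.T @ J                      # [[14, 28], [28, 56]], rank 1
r = np.array([0.1, -0.2, 0.3])
g = J.T @ r

# Adding lambda*I shifts every eigenvalue up by lambda, so the damped
# matrix is positive definite for any lambda > 0 -- no eigenvalue
# inspection required before factorizing.
lam = 1e-3
H_damped = H + lam * np.eye(2)
L = np.linalg.cholesky(H_damped)          # succeeds: H_damped is SPD
step = -np.linalg.solve(H_damped, g)      # damped (Levenberg-Marquardt) step
```

The same argument applies to BFGS approximations, since the update keeps the matrix positive definite as long as the curvature condition holds.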

These notions are well known in the optimization literature.

Bruno

Date    Author
3/4/13 Toan Cao
3/4/13 Steven Lord
3/4/13 Toan Cao
3/5/13 Steven Lord
3/5/13 Toan Cao
3/6/13 Matt J
3/6/13 Matt J
3/6/13 Toan Cao
3/6/13 Matt J
3/4/13 Matt J
3/4/13 Toan Cao
3/5/13 Matt J
3/5/13 Bruno Luong
3/6/13 Matt J
3/6/13 Bruno Luong
3/6/13 Matt J
3/6/13 Bruno Luong
3/6/13 Matt J
3/6/13 Bruno Luong
3/6/13 Matt J
3/7/13 Bruno Luong
3/7/13 Matt J
3/7/13 Bruno Luong
3/7/13 Matt J
3/7/13 Bruno Luong
3/7/13 Matt J
3/7/13 Bruno Luong
3/8/13 Matt J
3/8/13 Bruno Luong
3/7/13 Toan Cao
3/7/13 Matt J
3/7/13 Toan Cao
3/7/13 Matt J

© The Math Forum at NCTM 1994-2017. All Rights Reserved.