Matt J
Posts: 4,994
Registered: 11/28/09

Re: Nonlinear optimization
Posted: Mar 8, 2013 12:18 AM
"Bruno Luong" <b.luong@fogale.findmycountry> wrote in message <khb5mn$7bg$1@newscl01ah.mathworks.com>...
> "Matt J" wrote in message <khb4uc$55k$1@newscl01ah.mathworks.com>...
> >
> > I don't see how that can be. For nonconvex functions and non-positive-definite Hessians, an empirically chosen lambda could easily leave H+lambda*I singular, or at least not positive definite and therefore non-descending. You would have to choose lambda >= -min(eig(H)) to be sure that didn't happen, and that would require an eigenanalysis of H.
>
> The strategy and rules to choose lambda are given by the LM algorithm. You can check the textbook that explains all the details.
>
> Again, all that is well known.
===================
I've gone back to two textbooks now, Bertsekas and Nocedal & Wright. They both deal with LM strictly in the context of Gauss-Newton approximations to the Hessian in nonlinear least squares. I don't get the sense that LM for true Hessians has been explored all that extensively.
Neither book, incidentally, spells out the lambda-tuning procedure in great detail. Nocedal & Wright, in fact, offer a non-empirical alternative derived from trust-region ideas. Again, though, that's applicable only to Gauss-Newton LM.
One can see that the empirical lambda-tuning rules in the original LM papers should, in theory, be applicable to true Hessians without requiring an eig(H) operation. However, it's still easy to imagine that if the algorithm lands in a nonconvex region where H is not positive definite, you might have to solve
(H + lambda*I)*x = g
for several values of lambda before a descent direction is found.
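To make that trial-and-error loop concrete, here is a rough sketch (in Python/NumPy, since no code appears in the thread): inflate lambda until H + lambda*I passes a Cholesky test, then solve for x. The starting value, growth factor, and function name are illustrative assumptions on my part, not the tuning rule from any of the LM references discussed above.

```python
import numpy as np

def lm_descent_direction(H, g, lam0=1e-3, factor=10.0, max_tries=50):
    """Keep increasing lambda until H + lambda*I is positive definite
    (detected via a successful Cholesky factorization), then solve
    (H + lambda*I)*x = g.  The schedule lam0/factor is a placeholder,
    not a tuning rule from the LM literature."""
    lam = lam0
    n = H.shape[0]
    for _ in range(max_tries):
        try:
            # cholesky raises LinAlgError unless H + lam*I is posdef
            L = np.linalg.cholesky(H + lam * np.eye(n))
        except np.linalg.LinAlgError:
            lam *= factor  # not posdef yet: damp more heavily and retry
            continue
        # Solve L L' x = g via the two triangular systems
        x = np.linalg.solve(L.T, np.linalg.solve(L, g))
        return x, lam
    raise RuntimeError("no positive definite shift found")

# Indefinite Hessian, as in a nonconvex region: eigenvalues (1, -2),
# so several lambda values fail before the Cholesky test passes.
H = np.array([[1.0, 0.0], [0.0, -2.0]])
g = np.array([1.0, 1.0])
x, lam = lm_descent_direction(H, g)
```

Sign convention: read g here as the negative gradient, so positive definiteness of H + lambda*I gives x'*g > 0 and x is a descent direction; with the opposite convention the step is -x.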

