"Matt J" wrote in message <firstname.lastname@example.org>...
> "Bruno Luong" <email@example.com> wrote in message <firstname.lastname@example.org>...
> One can see that the empirical lambda-tuning rules in the original LM papers should in theory be applicable to true Hessians and not require an eig(H) operation. However, it's still easy to imagine that if the algorithm lands in a non-convex region where H is not positive definite, you might have to solve
>
> (H + lambda*I)*x = -g
>
> for several values of lambda before a descent direction is found.
Yes, and usually lambda is adjusted automatically (by multiplying it by a constant factor) until the acceptance criteria are met. A simple check, such as requiring a decrease of the objective function, will ensure (with high probability) that lambda is large enough, even if H is not positive definite.
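To make that loop concrete, here is a minimal sketch in Python (pure standard library). Everything in it is illustrative and of my own invention, not code from any toolbox: the function `damped_newton_step`, the 2x2 Cramer's-rule solver, the starting lambda of 1e-3, the factor of 10, and the saddle-shaped test function are all assumptions chosen to show lambda growing until the damped step actually decreases f.

```python
def solve2(A, b):
    # Solve a 2x2 linear system A x = b by Cramer's rule.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]

def damped_newton_step(f, x, g, H, lam=1e-3, factor=10.0, max_tries=50):
    """One damped Newton step: solve (H + lam*I) step = -g, and keep
    multiplying lam by a constant factor until f decreases."""
    fx = f(x)
    for _ in range(max_tries):
        A = [[H[0][0] + lam, H[0][1]],
             [H[1][0],       H[1][1] + lam]]
        try:
            step = solve2(A, [-g[0], -g[1]])
        except ZeroDivisionError:     # singular system: damp more
            lam *= factor
            continue
        x_new = [x[0] + step[0], x[1] + step[1]]
        if f(x_new) < fx:             # simple decrease check accepts the step
            return x_new, lam
        lam *= factor                 # not enough damping yet
    return x, lam

# Illustrative function with an indefinite Hessian near the start point:
# f(x, y) = x^2 - y^2 + 0.25*y^4
f = lambda p: p[0]**2 - p[1]**2 + 0.25 * p[1]**4
x = [0.1, 0.5]
g = [2 * x[0], -2 * x[1] + x[1]**3]            # gradient at x
H = [[2.0, 0.0],
     [0.0, -2.0 + 3 * x[1]**2]]                # Hessian at x; H[1][1] < 0

x_new, lam_out = damped_newton_step(f, x, g, H)
```

With the indefinite H above, the small initial lambda produces steps toward the saddle that the decrease check rejects, and lambda is multiplied up (here to 10) before a step is accepted, illustrating the "adjust until criteria are fulfilled" behavior without ever computing eig(H).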
That said, I don't know many people who actually use the true Hessian in nonlinear optimization.