On Feb 18, 6:17 pm, Ray Vickson <RGVick...@shaw.ca> wrote:
> On Feb 18, 5:34 pm, dave.rud...@usask.ca wrote:
> > On Feb 18, 3:31 pm, rgvick...@gmail.com wrote:
> > > Why am I making an issue of this? Well, there are two aspects to your
> > > problem: (1) getting a local optimum in reasonable time and with
> > > reasonable accuracy; and (2) getting a global optimum. Let's just look
> > > at (1) for the moment. It has long been known through examples that
> > > simple gradient searches (of the type you seem to be using) are
> > > dangerous: you can have convergence to a NON-OPTIMAL point.
> >
> > What do you mean non-optimal? Of course it can converge on a local
> > minimum, but that is a problem with any local method, as far as I
> > understand.
>
> By non-optimal, I mean not even a local minimum, even in a "convex"
> problem where any local minimum is automatically a global minimum. The
> successive points can get jammed up along a constraint, but at a point
> that is very far from satisfying optimality conditions.
Ah, I can see how penalty methods can help avoid this. Thanks.
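For anyone following along, here is a minimal sketch of the penalty idea on a toy problem of my own (not from the thread): instead of forcing iterates to stay feasible, where plain gradient steps can jam against the constraint, the constraint is folded into the objective as a quadratic penalty, and a sequence of unconstrained problems is solved with a growing penalty weight mu.

```python
# Toy quadratic penalty method (illustrative example, not the poster's code):
# minimize f(x, y) = (x-2)^2 + (y-2)^2  subject to  x + y <= 2.
# The constrained optimum is (1, 1). Rather than stepping along the
# constraint boundary, we minimize the unconstrained penalized objective
#     P(x, y; mu) = f(x, y) + mu * max(0, x + y - 2)^2
# for increasing mu, warm-starting each phase from the previous solution.

def grad_P(x, y, mu):
    v = max(0.0, x + y - 2.0)            # constraint violation, 0 if feasible
    gx = 2.0 * (x - 2.0) + 2.0 * mu * v  # d/dx of f + penalty
    gy = 2.0 * (y - 2.0) + 2.0 * mu * v  # d/dy of f + penalty
    return gx, gy

def penalty_method(x=0.0, y=0.0):
    for mu in [1.0, 10.0, 100.0, 1000.0]:
        lr = 0.4 / (1.0 + mu)            # shrink the step as mu stiffens P
        for _ in range(20000):
            gx, gy = grad_P(x, y, mu)
            x -= lr * gx
            y -= lr * gy
    return x, y

x, y = penalty_method()
print(x, y)                              # close to the optimum (1, 1)
```

Each finite mu gives a slightly infeasible minimizer (here roughly (1 + 1/(4*mu), ...)), and the iterates approach the true constrained optimum as mu grows; that is the usual trade-off of pure penalty methods versus exact or augmented-Lagrangian variants.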