I have a constrained global optimization problem to solve. Some constraints are equality constraints, e.g. a = f(b); others are inequality constraints, such as c >= f(d). So life can be quite difficult.
We have a naive implementation of a minimizer for this problem. It simply uses a gradient descent algorithm, i.e.
x[n+1] := x[n] - dt * grad( f( x[n] ) )
and after each step it manually enforces any constraints that have been violated, i.e.
a := f(b)
if( c < f(d) ) c := f(d)
To find the global minimum, we just run the above local search from several starting points.
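For concreteness, here is roughly what our loop looks like. The objective, gradient, and constraint projection are illustrative stand-ins (a parabolic bowl and a simple clamp), not our real problem:

```python
import numpy as np

CENTER = np.array([1.0, 2.0])

def f(x):
    # Illustrative objective: a parabolic bowl centered at CENTER.
    return float(np.sum((x - CENTER) ** 2))

def grad_f(x):
    return 2.0 * (x - CENTER)

def project(x):
    # Manually enforce any violated constraints after each step.
    # Our real equality (a = f(b)) and inequality (c >= f(d)) constraints
    # are stand-ins here; this just clamps the second coordinate.
    x = x.copy()
    x[1] = max(x[1], 0.5)
    return x

def local_descent(x0, dt=0.1, iters=1000, tol=1e-8):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x_new = project(x - dt * grad_f(x))  # step downhill, re-project
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Multistart: run the local search from a handful of drop points.
starts = [np.array([s, s]) for s in (-2.0, 0.0, 2.0, 4.0, 6.0)]
best = min((local_descent(s) for s in starts), key=f)
```

The constant dt and the fixed iteration cap are exactly the pain points described below.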
Yes, this is a pretty weak method, but it actually works quite well for our purposes. The main problem is that it is REALLY slow, as you might guess.
You might think we could reduce the number of drop points to speed things up, but in practice we already use relatively few of them: about five.
However, each run with the gradient descent method could take thousands of iterations to converge to something reasonable. So, a better way to speed it up would be to decrease the number of iterations. At the moment, we are evolving the solution with a constant step size (dt).
I have heard rumours of ways to derive an adaptive step-size, much like you can with numerical integrators. However, I haven't seen the details of how this is done. Can someone point me in the right direction?
Alternatively, what other methods might be useful for doing this kind of minimization? The shape of the energy field is such that there is one deep parabolic canyon with a few small craters along the way. Part of the reason we like our crappy little solver is that we can prevent it from getting caught in a crater way up the side of the wall by starting each local search near the estimated bottom of that canyon. So, if there is a method with similar properties, I would be especially interested.