
Topic: Step Control for Gradient Descent ?
Replies: 5   Last Post: Feb 23, 2009 1:46 PM

Dave Rudolf

Posts: 37
Registered: 10/3/05
Step Control for Gradient Descent ?
Posted: Feb 18, 2009 4:48 PM

Hi all,

I have a constrained global optimization problem to solve. Some of the
constraints are equality constraints, e.g., a = f(b); others are
inequality constraints, such as c >= f(d). So, life can be quite
difficult.

We have a naive implementation of a minimizer for this problem. It
simply uses a gradient descent algorithm, i.e.

x[n+1] := x[n] - dt * grad( f( x[n] ) )

and after each step it manually enforces any constraints that have
been violated, i.e.

a := f(b)
if( c < f(d) ) c := f(d)

To find the global minimum, we just run the above local search from
several starting points (drop points).
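
For concreteness, here is roughly what our current loop boils down to,
as a Python/numpy sketch with placeholder objective, gradient, and
constraint functions (the real ones are more involved):

import numpy as np

def f(x):
    # Placeholder energy function, purely illustrative.
    return 0.5 * np.dot(x, x)

def grad_f(x):
    # Gradient of the placeholder energy.
    return x

def project(x):
    # Manually enforce any violated constraints after each step,
    # e.g. a := f(b); if c < f(d): c := f(d). Identity here as a placeholder.
    return x

def local_descent(x0, dt=1e-3, max_iters=10000, tol=1e-8):
    # Fixed-step gradient descent, enforcing constraints after each step.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iters):
        x_new = project(x - dt * grad_f(x))
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

def multi_start(starts):
    # Run the local search from each drop point and keep the best minimum found.
    return min((local_descent(x0) for x0 in starts), key=f)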

Yes, this is a pretty weak method, but it actually works quite well
for our purposes. The main problem is that it is REALLY slow, as you
might guess.

You might think that we could reduce the number of drop points to
speed things up, but in practice we already use relatively few of
them. Like, 5.

However, each run with the gradient descent method could take
thousands of iterations to converge to something reasonable. So, a
better way to speed it up would be to decrease the number of
iterations. At the moment, we are evolving the solution with a
constant step size (dt).

I have heard rumours of ways to derive an adaptive step-size, much
like you can with numerical integrators. However, I haven't seen the
details of how this is done. Can someone point me in the right
direction?
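
The closest thing I can picture is a backtracking (Armijo-style) rule
that starts from a generous trial step and shrinks it until the
objective actually decreases. A rough sketch of that idea, purely
illustrative and not our code:

import numpy as np

def backtracking_step(f, grad_f, x, dt0=1.0, beta=0.5, c=1e-4):
    # Halve the trial step until it gives a sufficient decrease in f
    # (the Armijo condition), then take that step.
    g = grad_f(x)
    dt = dt0
    while f(x - dt * g) > f(x) - c * dt * np.dot(g, g):
        dt *= beta
        if dt < 1e-12:  # give up shrinking; the gradient may be unreliable here
            break
    return x - dt * g

Is that the kind of rule people mean, or is there something more
principled for descent methods?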

Alternatively, what other methods might be useful for doing this kind
of minimization? The shape of the energy field is such that there is
one deep parabolic canyon with a few small craters along the way. Part
of the reason we like our crappy little solver is that we can prevent
it from getting caught in a crater way up the side of the wall by
starting each local search near the estimated bottom of that canyon.
So, if there is a method with similar properties, I would be
especially interested.

Thanks for reading.

Dave


