Math Forum » Discussions » Software » comp.soft-sys.matlab

Topic: How to define Hessian for fminunc.
Replies: 3   Last Post: Oct 15, 2013 8:22 AM

Alan Weiss

Posts: 1,208
Registered: 11/27/08
Re: How to define Hessian for fminunc.
Posted: Oct 14, 2013 9:58 AM

On 10/12/2013 8:20 PM, Céldor wrote:
> First of all, apologies for asking a trivial question, but I am just
> learning multivariable calculus and the Optimization Toolbox in
> MATLAB--and optimization itself as well :)
>
> I was testing my understanding of the Optimization Toolbox in MATLAB
> on a simple 2D function and tried to find some local minima, but for
> particular starting points, MATLAB's FMINUNC function returns the
> exact starting point with the comment: "Initial point is a local
> minimum." I have supplied both a gradient and a user-supplied
> Hessian. I don't know whether the problem is MATLAB or me. Perhaps I
> made a mistake in defining the Hessian, but I believe it is correct.
>
> For random starting points, the function always finds a proper (the
> closest) minimum. However, for certain points--exactly at maxima--
> fminunc returns the same point with the statement above. Should I
> use a different optimization algorithm, or is this simply MATLAB
> behavior I need to accept? I thought providing the Hessian would
> sort it out. The Hessian is negative definite at all of the
> problematic points, which should indicate that the point is a
> maximum! I checked the Hessian via eig(hessian) to make sure it was
> negative definite, and it was. It looks like fminunc does not take
> the Hessian into account at all. What should I do to make fminunc
> use the Hessian?
>
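The eig-based definiteness check described in the quote can be sketched outside MATLAB as well. A minimal illustration (Python/NumPy here; the objective f(x, y) = cos(x) + cos(y) is a hypothetical stand-in, since the original post does not show its function):

```python
import numpy as np

# Hypothetical stand-in objective (the post does not give one):
# f(x, y) = cos(x) + cos(y), whose Hessian is diag(-cos(x), -cos(y)).
def hessian(p):
    return np.diag([-np.cos(p[0]), -np.cos(p[1])])

# At the origin the Hessian is diag(-1, -1): every eigenvalue is
# negative, so the matrix is negative definite and (0, 0) is a strict
# local maximum -- the same conclusion as MATLAB's eig(hessian) check.
eigvals = np.linalg.eigvalsh(hessian([0.0, 0.0]))
print(eigvals)              # [-1. -1.]
assert np.all(eigvals < 0)  # negative definite => local maximum
```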

**snip**

When you start at a point where the gradient of the objective is zero,
fminunc sees that the gradient is zero and concludes that it is at a
stationary point (a minimum, maximum, or saddle point). It stops
because the first-order optimality measure (the norm of the gradient)
is zero. It does not check second-order conditions in this case.

I hope this helps. You can read about the fminunc algorithms here:
http://www.mathworks.com/help/optim/ug/unconstrained-nonlinear-optimization-algorithms.html

Alan Weiss
MATLAB mathematical toolbox documentation



