
Topic: Non-linear optimization (comp.soft-sys.matlab)
Replies: 32   Last Post: Mar 8, 2013 2:22 AM

Matt J

Posts: 4,996
Registered: 11/28/09
Re: Non-linear optimization
Posted: Mar 6, 2013 3:00 PM

"Toan Cao" <toancv3010@gmail.com> wrote in message <kh84cj$ge4$1@newscl01ah.mathworks.com>...
>
> Hi Matt J,
> In my case, each point has its own transformation described by a rotation matrix and a translation vector. My cost function actually has another (third) term that constrains the movement of neighboring points to be smooth; its equation is complex and not easy to write in plain text.

===================

Even with a third smoothing term for neighboring points, it makes no sense to apply both a rotation and a translation to a single point. A translation alone is enough to move a point to any other location. There is no reason to give the movement of a point 6 degrees of freedom when 3 are enough, and the redundant parameters would ill-condition the optimization.


> I read a paper on finding the minimum of this cost function. It applied Levenberg-Marquardt to obtain the minimum. Following this direction, I am now stuck on finding the Jacobian of f(x), where F(x) = f(x)'*f(x).
> I think you are an expert in math; I hope you can give me some suggestions.

=============

All of the cost function terms that you've shown us so far are squared terms. Unless the third term breaks this pattern, it's not clear what difficulty you have in writing your cost function as F(x) = f(x)'*f(x).
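For instance, here is a minimal MATLAB sketch of the idea. The residual functions r1 and r2 below are hypothetical stand-ins for whatever your actual squared terms are, and x0 is your initial guess:

% If every term of the cost is a square, stack the individual
% residuals into one column vector f so that F(x) = f(x)'*f(x).
f = @(x) [ r1(x) ; r2(x) ];
F = @(x) f(x)' * f(x);

% lsqnonlin minimizes f(x)'*f(x) given only f, and will estimate
% the Jacobian of f by finite differences if you don't supply one:
x = lsqnonlin(f, x0);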

If the third term is not a sum of squares, we need to see it in order to know how to deal with it. I already gave you a suggestion, but it requires a lower bound f_low (not necessarily a tight one). Without the full cost function it is hard to advise you on how to find that lower bound.

Also, as Bruno hinted, Levenberg-Marquardt is actually applicable to any twice differentiable cost function, not just those that are sums of squares. The parameter update from x_n to x_{n+1} would be the solution to

(Hessian(x_n) + lambda*eye(N)) * (x_{n+1} - x_n) = -gradient(x_n)

However, you would have to code that yourself, including the adaptive selection of the lambda parameter.
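A minimal sketch of that iteration in MATLAB, assuming a user-supplied costFun that returns the cost, gradient, and Hessian at a point (the name and interface are my assumption, not something from your paper):

function x = dampedNewton(costFun, x0)
% Levenberg-Marquardt-style damped Newton iteration for a general
% twice differentiable cost. costFun is assumed (hypothetically)
% to return [F, grad, Hess] evaluated at x.
x = x0(:);
N = numel(x);
lambda = 1e-3;                          % initial damping
[F, g, H] = costFun(x);
for iter = 1:200
    step = -(H + lambda*eye(N)) \ g;    % solve (H + lambda*I)*step = -g
    [Fnew, gnew, Hnew] = costFun(x + step);
    if Fnew < F                         % cost decreased: accept the step
        x = x + step;
        F = Fnew; g = gnew; H = Hnew;
        lambda = lambda/10;             % trust the quadratic model more
    else                                % cost increased: reject, damp harder
        lambda = lambda*10;
    end
    if norm(g) <= 1e-8, break; end      % gradient small enough: done
end
end

As lambda grows, the step shrinks toward a short gradient-descent step; as it shrinks, the step approaches a full Newton step. The accept/reject rule above is the crudest workable version of the adaptive lambda selection.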


