Views expressed in these public forums are not endorsed by
Drexel University or The Math Forum.


Matt J (Posts: 4,992; Registered: 11/28/09)

Re: convex optimization problem
Posted: Aug 27, 2012 9:31 AM


Alan_Weiss <aweiss@mathworks.com> wrote in message <k1fpea$e5r$1@newscl01ah.mathworks.com>...
> When using the sqp or interior-point algorithms there is no need to set
> x(i) >= eps; these algorithms obey bounds at all iterations, and your
> extension of the definition to the boundary is fine.
=================
I'm not so sure. The function is not differentiable at the boundary: the directional derivative diverges as the boundary is approached from the right, and is undefined when approached from the left. This violates the differentiability assumptions that most textbook convergence results rely on.
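To see the blow-up concretely, here is a small numerical sketch. Judging from the substitution below, each term of the x-space objective behaves like x*log(x) near the boundary (the k_i and c_i only shift and scale this); its derivative log(x) + 1 diverges as x -> 0+.

```python
# Hedged sketch: the problematic single-term behavior f(x) = x*log(x).
# f(x) -> 0 as x -> 0+, but f'(x) = log(x) + 1 -> -inf, so the gradient
# of the objective blows up at the boundary x_i = 0.
import numpy as np

f = lambda x: x * np.log(x)
fprime = lambda x: np.log(x) + 1.0

for x in [1e-2, 1e-6, 1e-12]:
    print(x, f(x), fprime(x))  # derivative grows without bound in magnitude
```

This is why a bound like x(i) >= eps, or a change of variables, matters even for algorithms that respect bounds at every iterate.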
A solution might be to make the change of variables

   x_i = k_i*exp(z_i)

leading to the reformulated problem

   minimize    J = sum(i=1:N) k_i*exp(z_i)*(z_i + c_i)
   subject to  sum(i=1:N) k_i*exp(z_i) = 1
This is now smooth, with finite derivatives everywhere. It's true that this breaks convexity, but since it is a monotonic change of variables, the problem should still be unimodal (I think).
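As a sanity check on the reformulation, here is a sketch in Python/SciPy (the k_i and c_i values are made up for illustration; the post doesn't give them). For this particular objective, Lagrangian stationarity also yields a closed-form optimum, x_i = k_i*exp(-c_i) / sum_j k_j*exp(-c_j), which the numeric solve in z-space should reproduce:

```python
# Hedged sketch of the change-of-variables reformulation, with
# illustrative (assumed) values for k_i and c_i.
import numpy as np
from scipy.optimize import minimize

k = np.array([1.0, 2.0, 0.5])
c = np.array([0.3, -0.1, 0.7])

# Objective in z, where x_i = k_i*exp(z_i); smooth for all z.
def J(z):
    return np.sum(k * np.exp(z) * (z + c))

# Equality constraint: sum_i k_i*exp(z_i) = 1
cons = {"type": "eq", "fun": lambda z: np.sum(k * np.exp(z)) - 1.0}

# Start from the feasible point x_i = 1/N, i.e. z_i = log((1/N)/k_i)
z0 = np.log((np.ones(len(k)) / len(k)) / k)
res = minimize(J, z0, constraints=cons, method="SLSQP")
x = k * np.exp(res.x)

# Closed form from setting d/dx_i [x_i*(log(x_i/k_i) + c_i)] + lambda = 0
x_exact = k * np.exp(-c) / np.sum(k * np.exp(-c))
print(x)
print(x_exact)
```

The solver works in unconstrained-sign z, so no iterate ever touches the problematic boundary x_i = 0; positivity of x is automatic.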



