"Matt J" wrote in message <email@example.com>...
> > There's something strange and inconsistent about what you've been saying.
Well... It's not in what I'm saying, it's in what's happening! I don't know where to bang my head anymore!
> At the beginning of the paragraph, you say that the optimization worked correctly without scaling the parameters. Later, you say that without the scaling, some parameters get stuck at their initial values (but somehow that's okay). Surely, it can't be correct behavior that some parameters are untouched by the minimizer. The result can only be a solution if the initial guess for those parameters was already optimal.

To be precise, I meant that the results I get without scaling are what I have reason to believe is the correct solution (I don't have a ground truth to compare against).
> In any case, all of this scaling you've been attempting is expected to make little improvement. As I said earlier, FMINUNC already does internally the very same scaling you're attempting now. That's what the Hessian computations are for.

And here is where my doubts come in. The behaviour of fminunc changes dramatically when I scale my parameters. For one, the solution is definitely wrong, and the minimizer stops after 3 or 4 iterations instead of ~200, even with TolX set to be extremely small. Setting it to smaller values at best gives me one additional iteration with a ridiculously small step size. Then you said that fminunc scales using the Hessian. But you cited the example of x^2 + 1e4*y^2 as a problem that is difficult to minimize along the gradient direction. That should not happen if the Hessian scaled everything correctly, correct? (And what I'm doing with my scaling is making the gradient components have similar magnitudes in each direction.)
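To make the point concrete, here is a small sketch (in Python with scipy, since MATLAB isn't runnable here; BFGS plays the role of fminunc's quasi-Newton method) on that same x^2 + 1e4*y^2 example. A fixed-step gradient method is crippled by the ill-conditioning, a Hessian-based method converges quickly without any manual scaling, and manually rescaling y so the gradient components have similar magnitudes makes the problem trivially well-conditioned:

```python
import numpy as np
from scipy.optimize import minimize

# The ill-conditioned quadratic from the discussion: f(x, y) = x^2 + 1e4*y^2.
def f(p):
    x, y = p
    return x**2 + 1e4 * y**2

def grad(p):
    x, y = p
    return np.array([2.0 * x, 2e4 * y])

x0 = np.array([1.0, 1.0])

# Fixed-step steepest descent: the curvature 2e4 in y forces a tiny step
# (roughly 1/L with L = 2e4 for stability), so progress along x is very slow.
p = x0.copy()
step = 1.0 / 2e4
for _ in range(200):
    p = p - step * grad(p)
print("steepest descent after 200 iters: f =", f(p))  # still far from 0

# A quasi-Newton method (BFGS) builds a Hessian approximation that
# effectively rescales the problem internally, so it converges quickly
# without any manual parameter scaling.
res = minimize(f, x0, jac=grad, method="BFGS")
print("BFGS: iterations =", res.nit, " f =", res.fun)

# Manual scaling: substitute y = z/100 so the objective becomes
# g(x, z) = x^2 + z^2, whose gradient components have similar
# magnitudes in every direction (the scaling described above).
g = lambda q: q[0] ** 2 + q[1] ** 2
res2 = minimize(g, np.array([1.0, 100.0]), method="BFGS")
print("scaled problem: iterations =", res2.nit, " f =", res2.fun)
```

This is only an illustration of the general mechanism, not of fminunc's exact internals: if the Hessian (or its approximation) is used correctly, explicit rescaling should indeed change little, which is why the dramatic behaviour change you see after scaling is suspicious.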