"Matt J" wrote in message <firstname.lastname@example.org>...
> > Nevertheless I can see them! If I plot my objective function along one
> > parameter there are some local minima, around integer values of
> > translation. Extremely thin, but present. How deep they are depends on
> > how much I've smoothed my original image.
> ==============
>
> Sounds weird. And it's not intuitive why cubic splines would avoid them.
I knew an explanation for that! It was pretty simple, but I don't remember it now. Anyway, with a totally unsmoothed image (which has frequency content much higher than Nyquist would allow) these local minima are quite deep when working with linear interpolation, and they disappear with cubic or spline interpolation. The more I smooth the image, the more these minima tend to disappear.
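To show what I mean, here is a minimal sketch (not my actual registration code; the test image, translation range and step size are arbitrary choices) that plots the SSD along a single translation for linear versus cubic interpolation:

fixed = double(imread('cameraman.tif'))/255;          % any grayscale image will do
[rows, cols] = ndgrid(1:size(fixed,1), 1:size(fixed,2));

Flin = griddedInterpolant(rows, cols, fixed, 'linear', 'none');
Fcub = griddedInterpolant(rows, cols, fixed, 'cubic',  'none');

tx = -3:0.02:3;                                       % sub-pixel translations along x
ssdLin = zeros(size(tx));
ssdCub = zeros(size(tx));
for k = 1:numel(tx)
    movedLin = Flin(rows, cols + tx(k));              % resample at shifted positions
    movedCub = Fcub(rows, cols + tx(k));
    valid = ~isnan(movedLin) & ~isnan(movedCub);      % drop out-of-bounds samples
    ssdLin(k) = sum((movedLin(valid) - fixed(valid)).^2);
    ssdCub(k) = sum((movedCub(valid) - fixed(valid)).^2);
end
plot(tx, ssdLin, tx, ssdCub);
legend('linear interpolation', 'cubic interpolation');
xlabel('translation (pixels)'); ylabel('SSD');

If what I described is right, the dips near integer translations should show up with 'linear' and be much weaker with 'cubic', especially if the image is pre-smoothed.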
I know the chain rule... But if I have to compute the Jacobian analytically, I end up doing it on paper, and depending on what my Tx looks like, that can get quite intricate. I'm not as fast at taking derivatives as I was in high school and my first year of university, when I used them every day. The last time I computed a gradient for a minimizer, it was for the SSD between a vector and a Gaussian whose mu and sigma were described by some functions of my variables. The solution was more than a line long!! The derivation was straightforward and the result is very cheap to evaluate, but it was still really complicated (and it took a day to find it and get all the "2"s and the minus signs right!).
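For what it's worth, I probably could have let the Symbolic Math Toolbox grind through the chain rule instead of doing it on paper. A rough sketch for that kind of objective (the mu and sigma parameterizations below are made up, just to show the idea):

syms p1 p2 x real
mu    = p1^2 + p2;                            % made-up parameterization of mu
sigma = exp(p2);                              % made-up parameterization of sigma
g     = exp(-(x - mu)^2 / (2*sigma^2));       % (unnormalized) Gaussian model

xi  = sym('xi', [1 5]);                       % symbolic sample locations
vi  = sym('vi', [1 5]);                       % symbolic data values
ssd = sum((vi - subs(g, x, xi)).^2);          % SSD between data and model
grad = simplify(gradient(ssd, [p1; p2]));     % chain rule handled automatically

% Turn the symbolic gradient into a numeric function for the minimizer:
gradfun = matlabFunction(grad, 'Vars', {p1, p2, xi, vi});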
-----
> You may have to do this for the Hessian as well, if the algorithm you're
> using requires a Hessian computation. Also, you should be aware that
> griddedInterpolant is faster than interpn. Also, imtransform allows you to
> define your own resampling operations...
===

Ok, thank you!
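For reference, the imtransform route with an explicitly chosen resampler seems to look something like this (the translation, fill value and display call below are arbitrary, just to show the calls):

A = im2double(imread('cameraman.tif'));              % any grayscale image
T = maketform('affine', [1 0 0; 0 1 0; 2.5 3.5 1]);  % translate by (2.5, 3.5) pixels
R = makeresampler('cubic', 'fill');                  % cubic kernel, fill outside the image
B = imtransform(A, T, R, 'XData', [1 size(A,2)], ...
                'YData', [1 size(A,1)], 'FillValues', 0);
imshowpair(A, B, 'montage');                         % compare original and shifted image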