
Topic: Image registration: linear vs. cubic interp and objective function minimization
Replies: 4   Last Post: Jul 19, 2012 5:42 AM

Luca

Posts: 77
Registered: 6/6/12
Re: Image registration: linear vs. cubic interp and objective function minimization
Posted: Jul 19, 2012 5:42 AM

"Matt J" wrote in message <ju6tur$obm$1@newscl01ah.mathworks.com>...

> > Nevertheless I can see them! If I plot my objective function along one parameter there are some local minima, around integer values of translation. Extremely thin, but present. How deep they are depends on how much I've smoothed my original image.
> ==============
>
> Sounds weird. And it's not intuitive why cubic splines would avoid them
>


I once knew an explanation for that! It was pretty simple, but I don't remember it now.
Anyway, with a totally unsmoothed image (whose frequency content is far above what Nyquist would allow), these local minima are quite deep when working with linear interpolation, and they disappear with cubic or spline interpolation.
As I smooth the image more, these minima also tend to disappear.
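Here is a minimal sketch of what I mean (the names and the 1-D toy signal are made up, not my actual data): profile the SSD along a single translation with interp1, once with 'linear' and once with 'spline'.

x = (1:100)';
f = randn(100,1);                 % unsmoothed "image": full-band noise
g = circshift(f, 3);              % the same signal shifted by 3 samples
t = linspace(0, 6, 601);          % candidate translations
ssdLin = zeros(size(t));
ssdSpl = zeros(size(t));
for k = 1:numel(t)
    fLin = interp1(x, f, x - t(k), 'linear', 0);   % warp f by t(k)
    fSpl = interp1(x, f, x - t(k), 'spline', 0);
    ssdLin(k) = sum((fLin - g).^2);
    ssdSpl(k) = sum((fSpl - g).^2);
end
plot(t, ssdLin, t, ssdSpl);
legend('linear', 'spline'); xlabel('translation t'); ylabel('SSD');

With the linear profile you can see the kinks (and, for signals like this, dips) at integer t; the spline profile is smooth.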

> You need to learn to use the chain rule in matrix/vector form.
>
> http://en.wikipedia.org/wiki/Chain_rule#The_chain_rule_in_higher_dimensions
>
> The gradient of SSD has a fairly simple form when you do
>
> grad(norm(Tx-y)^2) = 2*Jacobian(Tx)'*(Tx-y)


I know the chain rule... But if I have to compute the Jacobian analytically, I end up doing it on paper, and depending on what my Tx looks like, that can get quite intricate.
And I'm not as quick at taking derivatives as I was in high school and my first year of university, when I used them every day.
The last time I computed a gradient for a minimizer, it was the gradient of the SSD between a vector and a Gaussian whose mu and sigma were functions of my variables. The solution was more than a line long!!
The derivation itself was straightforward, and the result is very cheap to evaluate, but it was still really complicated! (It took a day to get all the "2"s and the minus signs right!)
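To make that concrete, here is a minimal sketch of the chain rule for the simplest possible Tx, a pure 1-D translation (all names are made up, and the spatial derivative is only approximated with gradient), together with a finite-difference check:

x = (1:100)';
f = conv(randn(100,1), ones(5,1)/5, 'same');    % mildly smoothed toy signal
g = circshift(f, 3);                            % fixed "image"
t = 2.7;                                        % current translation guess
ft  = interp1(x, f, x - t, 'spline', 0);        % warped image f(x - t)
dfx = gradient(f);                              % centered-difference approx of f'
dft = interp1(x, dfx, x - t, 'spline', 0);      % f'(x - t)
r = ft - g;                                     % residual Tx - y
gAnalytic = 2 * sum(r .* (-dft));               % chain rule: d/dt f(x - t) = -f'(x - t)
h = 1e-5;                                       % finite-difference check
ssd = @(tt) sum((interp1(x, f, x - tt, 'spline', 0) - g).^2);
gNumeric = (ssd(t + h) - ssd(t - h)) / (2 * h);
fprintf('analytic: %g   numeric: %g\n', gAnalytic, gNumeric);

The two values agree up to the error of the centered-difference approximation of f'; for a richer Tx, the -f'(x - t) term is what becomes the Jacobian in Matt's formula.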


-----
> You may have to do this for the Hessian as well, if the algorithm you're using requires a Hessian computation. Also, you should be aware that griddedInterpolant is faster than interpn. Also, imtransform allows you to define your own resampling operations...
===
Ok, thank you!
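For anyone finding this thread later, a minimal usage sketch of the griddedInterpolant Matt mentions (the stand-in image F and the query grid are made up):

F = peaks(100);                                 % stand-in for an image
G = griddedInterpolant(F, 'cubic');             % default grid: 1:size(F,1) by 1:size(F,2)
[Xq, Yq] = ndgrid(10.25:0.5:20, 30.25:0.5:40);  % subpixel query points (ndgrid format)
Fq = G(Xq, Yq);                                 % evaluate; the object is reusable

Building G once and evaluating it at every iteration, instead of calling interpn each time, is part of what makes it faster.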



