On 5/2/2013 11:29 PM, runcyclexcski wrote:
> Hi all,
>
> I am fitting large numbers (millions) of 5x5 matrices of int16 values to 2-D Gaussians (6 variables). A typical matrix contains a higher-intensity value at (3,3), with intensity decaying towards the periphery (it's a CCD image of a delta function, i.e. a diffraction-limited spot).
>
> I get great results with fminsearch; it consistently converges to a solution within ~300 iterations. The problem is that it takes too long - about 0.02 s per matrix - which scales up quickly with millions of matrices. I would like to speed this up at least 10-fold (with a method other than running 10 cores at once).
>
> I tried to run fminunc with the exact same parameters as fminsearch, and I am getting the same (slow) performance. It converges to the same result in 40 iterations, but overall takes the same amount of time per matrix as the 300 iterations of the simplex, i.e. each iteration takes 10x longer than the simplex.
>
> Would running mmx precompiled code help? Would predefining derivatives help?
>
> I define my error function as the sum of squared differences:
>
>   C = (X(i)-x0)^2/(2*sx^2);
>   D = (Y(k)-y0)^2/(2*sy^2);
>   error = error + (I(i,k)-b-A*exp(-1*(C + D)))^2;
>
> Thank you in advance!
lsqnonlin or lsqcurvefit would almost certainly be faster: they exploit the least-squares structure of your cost function (Jacobian-based Gauss-Newton/trust-region steps) instead of treating it as a black box the way fminsearch and fminunc do. Also, you could impose bounds (this might speed things up a bit), and you can get a good initial guess from the mean or mode of your previously fitted values.
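For anyone wanting a concrete picture of the suggested approach, here is a minimal sketch in Python using SciPy's least_squares, which plays the same role as lsqcurvefit/lsqnonlin (a trust-region least-squares solver with bounds). The grid size, parameter ordering (A, x0, y0, sx, sy, b), bounds, and the synthetic test spot are all illustrative assumptions, not taken from the original post:

```python
import numpy as np
from scipy.optimize import least_squares

# 5x5 pixel grid, matching the 5x5 matrices in the question
X, Y = np.meshgrid(np.arange(5.0), np.arange(5.0))

def model(p, X, Y):
    # 2-D Gaussian with offset: same form as the poster's error function
    A, x0, y0, sx, sy, b = p
    return b + A * np.exp(-((X - x0)**2 / (2 * sx**2)
                            + (Y - y0)**2 / (2 * sy**2)))

def residuals(p, X, Y, I):
    # least_squares wants the vector of residuals, not their squared sum;
    # the solver forms the sum of squares internally
    return (model(p, X, Y) - I).ravel()

# synthetic noiseless spot (hypothetical parameters, for illustration only)
true_p = np.array([100.0, 2.0, 2.0, 1.0, 1.0, 10.0])
I = model(true_p, X, Y)

# cheap initial guess from the data itself, as the reply suggests
p0 = [I.max() - I.min(), 2.0, 2.0, 1.0, 1.0, I.min()]

# bounds keep the center on the chip and the widths physically sensible
lb = [0.0, 0.0, 0.0, 0.1, 0.1, 0.0]
ub = [np.inf, 4.0, 4.0, 5.0, 5.0, np.inf]

res = least_squares(residuals, p0, bounds=(lb, ub), args=(X, Y, I))
fitted = res.x
```

The key point carries over directly to MATLAB: pass the residual vector (or the model function, for lsqcurvefit) rather than the scalar sum of squares, so the solver can exploit the problem structure. Supplying an analytic Jacobian (the "predefining derivatives" the poster asks about) typically speeds this family of solvers up further.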