Woody <email@example.com> writes:
>The project I am working on involves fitting experimental data to a
>model function. The model has multiple parameters and is non-linear.
>My questions involve the proper criterion for optimization of fit when
>there are multiple experiments.
>
>The model function is of the form m(a; p; x, t), with a a vector of
>fixed parameters, p the parameters to be fitted, and x and t the
>independent variables (assumed to have no error).
>
>We're using least squares as our measure of best fit for a single
>experiment; i.e., minimize SUM((m(x_i,t_i) - y_i)^2)/n. This is the
>form when all n experimental points (t_i,x_i,y_i) are weighted
>equally. When they are not, each point is assigned a weight estimated
>as 1/variance, and the denominator replaces n with SUM(w_i).
>
>When there is more than one experiment to be fit simultaneously with
>the same model function, what should the best-fit criterion be? There
>are two typical situations:
>
>1) A small number of either fixed or variable parameters are different
>for each experiment, but the remainder are the same. This may occur,
>for example, if the initial conditions differ for each experiment, but
>the physical phenomena are the same.
>
I think in this case the fits must be done independently of each other, since changing some (fixed) parameters can change the behaviour of the fit function drastically. This is true even if only the initial values differ.
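To make that concrete, here is a minimal sketch of such independent weighted fits using scipy. Since the actual m(a; p; x, t) isn't given, an exponential decay with a per-experiment amplitude stands in for it; the model, the parameter values, and the noise levels are all made up for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical stand-in for m(a; p; x, t): exponential decay,
# p = (amplitude, rate). The real model would go here.
def model(p, t):
    return p[0] * np.exp(-p[1] * t)

def weighted_residuals(p, t, y, w):
    # sqrt(w_i) * (m(p; t_i) - y_i): squaring and summing these
    # gives the weighted least-squares criterion from the post.
    return np.sqrt(w) * (model(p, t) - y)

rng = np.random.default_rng(0)

# Two synthetic experiments that differ only in their initial
# condition (the amplitude); the decay rate 0.8 is common.
experiments = []
for amp in (2.0, 5.0):
    t = np.linspace(0.0, 4.0, 50)
    sigma = 0.05
    y = amp * np.exp(-0.8 * t) + rng.normal(0.0, sigma, t.size)
    w = np.full_like(t, 1.0 / sigma**2)   # weight = 1 / variance
    experiments.append((t, y, w))

# Fit each experiment independently of the others.
fits = [least_squares(weighted_residuals, x0=[1.0, 1.0], args=exp).x
        for exp in experiments]
for p in fits:
    print(p)   # (amplitude, rate) per experiment
```

Each experiment then yields its own estimate of every parameter, including the ones that are physically the same across experiments.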
>2) The scaling of experiments differs. This will be taken into account
>by the weighting. This occurs when the same physical phenomena are
>measured by two different methods.
One idea would be to scale all data to some common artificial magnitude (1 is my favorite), but more interesting would be to do both: independent fits and a combined fit of all data, in the fashion of an analysis of variance. This would give you a feel for the reliability and sensitivity of your fit parameters.
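A rough sketch of such a combined fit, again with a made-up exponential model: the decay rate is shared across experiments, each experiment gets its own scale factor to absorb the differing measurement magnitudes, and all weighted residuals are stacked into one least-squares problem. Everything here (model, scales, noise levels) is assumed for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical shared model: unit-amplitude exponential decay.
def model(rate, t):
    return np.exp(-rate * t)

def global_residuals(theta, experiments):
    # theta = [shared rate, scale_1, scale_2, ...]
    rate, scales = theta[0], theta[1:]
    res = []
    for s, (t, y, w) in zip(scales, experiments):
        res.append(np.sqrt(w) * (s * model(rate, t) - y))
    # Concatenating turns all experiments into one combined
    # weighted least-squares problem.
    return np.concatenate(res)

rng = np.random.default_rng(1)

# Two "measurement methods" with very different magnitudes and
# noise levels; the weights 1/variance handle the difference.
experiments = []
for scale, sigma in ((1.0, 0.02), (40.0, 0.8)):
    t = np.linspace(0.0, 4.0, 50)
    y = scale * np.exp(-0.8 * t) + rng.normal(0.0, sigma, t.size)
    experiments.append((t, y, np.full_like(t, 1.0 / sigma**2)))

theta0 = [1.0, 1.0, 1.0]   # initial guess: rate, scale_1, scale_2
fit = least_squares(global_residuals, theta0, args=(experiments,)).x
print(fit)   # [shared rate, scale_1, scale_2]
```

Comparing the shared-rate estimate from this combined fit with the rates from the independent fits is exactly the kind of consistency check described above.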