


Re: Neural network in matlab
Posted:
May 2, 2013 12:48 AM


"Babar Zaman " <ravian1011@yahoo.com> wrote in message <klqliq$ndl$1@newscl01ah.mathworks.com>...
> Hi, please can anyone guide me on a method to fix the desired value (training goal) when training an ANN with the backpropagation method? Or is there any method to select the desired value on the basis of the training data?
> Thanks
I have had satisfactory results with the following. See Wikipedia or a statistics reference for discussions of the R-square (R^2) statistic, the "coefficient of determination".
net.trainParam.min_grad = MSEtrn00/200;
net.trainParam.goal = max(0, 0.01*Ndof*MSEtrn00a/Ntrneq);
% NAIVE CONSTANT OUTPUT MODEL ("00")

[ O Ntrn ] = size(ttrn)
Ntrneq = prod(size(ttrn))  % = Ntrn*O  No. of training equations
meantrn = mean(ttrn,2);    % Constant output (regardless of input)
Nw00 = numel(meantrn)      % = O  No. of estimated unknowns
Ndof00 = Ntrneq - Nw00     % = (Ntrn-1)*O  No. of estimation degrees of freedom
ytrn00 = repmat(meantrn,1,Ntrn);
yval00 = repmat(meantrn,1,Nval);
ytst00 = repmat(meantrn,1,Ntst);
MSEtrn00 = sse(ttrn-ytrn00)/Ntrneq   % Biased: same data for training and testing
                                     % = mse(ttrn-ytrn00)
                                     % = mean(var(ttrn',1))  ( Ntrn divisor )
MSEtrn00a = sse(ttrn-ytrn00)/Ndof00  % DOF-"a"djusted
                                     % = Ntrneq*MSEtrn00/Ndof00
                                     % = Ntrn*mean(var(ttrn',1))/(Ntrn-1)
                                     % = mean(var(ttrn'))  Unbiased ( (Ntrn-1) divisor )
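For anyone who wants to check this algebra outside MATLAB, here is a NumPy sketch of the naive constant-output model. The toy data sizes and variable names are my own, not from the Neural Network Toolbox; it just confirms the two variance identities quoted above.

```python
import numpy as np

# Hypothetical example data: O = 2 outputs, Ntrn = 10 training cases,
# laid out like MATLAB's ttrn (O x Ntrn)
rng = np.random.default_rng(0)
O, Ntrn = 2, 10
ttrn = rng.normal(size=(O, Ntrn))

Ntrneq = ttrn.size                           # No. of training equations = Ntrn*O
meantrn = ttrn.mean(axis=1, keepdims=True)   # constant-output model
Nw00 = meantrn.size                          # = O estimated unknowns
Ndof00 = Ntrneq - Nw00                       # = (Ntrn-1)*O degrees of freedom

ytrn00 = np.tile(meantrn, (1, Ntrn))         # repmat(meantrn,1,Ntrn)
sse00 = np.sum((ttrn - ytrn00)**2)

MSEtrn00 = sse00 / Ntrneq                    # biased ( Ntrn divisor )
MSEtrn00a = sse00 / Ndof00                   # DOF-adjusted ( (Ntrn-1) divisor )

# The identities quoted above hold exactly:
assert np.isclose(MSEtrn00, ttrn.var(axis=1, ddof=0).mean())
assert np.isclose(MSEtrn00a, ttrn.var(axis=1, ddof=1).mean())
```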
% ADVANCED MODELS ( e.g., NNs ): I-H-O MLP
[ I Ntrn ] = size(xtrn)
Nw = (I+1)*H+(H+1)*O           % No. of estimated unknowns (weights)
Ndof = Ntrneq - Nw             % No. of estimation degrees of freedom
MSEtrn = sse(ttrn-ytrn)/Ntrneq % Biased: same data for training and testing
                               % = mse(ttrn-ytrn)
MSEtrna = sse(ttrn-ytrn)/Ndof  % DOF-"a"djusted
                               % = Ntrneq*MSEtrn/Ndof

% COEFFICIENT OF DETERMINATION ( R-square statistic, R^2; see Wikipedia )
% Fraction of target variance "explained" by the advanced model ( 0 <= R^2 <= 1 )
NMSEtrn = MSEtrn/MSEtrn00     % Normalized MSE
R2trn = 1 - NMSEtrn           % R-square (R^2) statistic
NMSEtrna = MSEtrna/MSEtrn00a
R2trna = 1 - NMSEtrna         % "A"djusted R-square (R^2) statistic
% Ndof > 0 MSE training goal: R2trna = 0.99 (my choice)

MSEtrngoala = 0.01*MSEtrn00a
MSEtrngoal = 0.01*Ndof*MSEtrn00a/Ntrneq
           % = 0.01*Ndof*mean(var(ttrn'))/Ntrneq
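As a sanity check on the goal formula (again in NumPy, with made-up network sizes rather than anything from the toolbox): a net that stops exactly at MSEtrngoal has an adjusted R^2 of exactly 0.99.

```python
import numpy as np

# Assumed toy sizes: I inputs, H hidden nodes, O outputs, Ntrn cases
I, H, O, Ntrn = 4, 6, 2, 100
Ntrneq = Ntrn * O
Nw = (I + 1)*H + (H + 1)*O                   # weights of an I-H-O MLP
Ndof = Ntrneq - Nw                           # positive here (200 - 44)

rng = np.random.default_rng(1)
ttrn = rng.normal(size=(O, Ntrn))
MSEtrn00a = ttrn.var(axis=1, ddof=1).mean()  # unbiased reference MSE

MSEtrngoal = 0.01 * Ndof * MSEtrn00a / Ntrneq

# If training stops exactly at the goal, the DOF-adjusted MSE and R^2 are:
MSEtrna = Ntrneq * MSEtrngoal / Ndof
R2trna = 1 - MSEtrna / MSEtrn00a
assert np.isclose(R2trna, 0.99)
```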
% ALTERNATE TECHNIQUES for arbitrary Ndof
% Nw > Ntrneq <==> MORE UNKNOWNS THAN TRAINING EQUATIONS <==> Ndof < 0
% Nw > Ntrneq <==> H > Hub (upper bound)

1. H <= Hub = -1 + ceil( (Ntrneq-O) / (I+O+1) )
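The Hub bound is easy to verify numerically. A small Python sketch (the function names are mine, chosen for illustration):

```python
import math

def num_weights(I, H, O):
    # Weights of an I-H-O MLP, as in the post: Nw = (I+1)*H + (H+1)*O
    return (I + 1)*H + (H + 1)*O

def hub(Ntrneq, I, O):
    # Largest H that keeps Ndof = Ntrneq - Nw positive
    return -1 + math.ceil((Ntrneq - O) / (I + O + 1))

# Example: I = 4 inputs, O = 2 outputs, Ntrn = 100 cases => Ntrneq = 200
I, O, Ntrneq = 4, 2, 200
H = hub(Ntrneq, I, O)                      # -> 28
assert num_weights(I, H, O) < Ntrneq       # Ndof > 0 at H = Hub
assert num_weights(I, H + 1, O) > Ntrneq   # but not at H = Hub + 1
```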
2. Validation set stopping: stop training if mse(tval-yval) reaches its minimum before mse(ttrn-ytrn) does
3. Regularization of the goal minimization objective (help/doc mse and help/doc mae)
   a. REGgoal1 = MSE + alpha*MSW % MSW = mean(sum(squared weights))
   b. REGgoal2 = MAE + beta*MAW  % MAW = mean(sum(abs(weights)))
4. Bayesian regularization training function (automatically optimizes alpha or beta)
net.trainFcn = 'trainbr'; % help/doc trainbr

Hope this helps.

Greg



