Topic: Neural network in matlab
Replies: 2   Last Post: May 2, 2013 12:48 AM

Greg Heath

Re: Neural network in matlab
Posted: May 2, 2013 12:48 AM

"Babar Zaman " <ravian1011@yahoo.com> wrote in message
<klqliq$ndl$1@newscl01ah.mathworks.com>...
> Hi, can anyone guide me on a method to fix the desired value when training an ANN with the backpropagation method? Or is there a method for selecting the desired value on the basis of the training data?
> Thanks


I have had satisfactory results with the following. See Wikipedia or a statistics reference for a discussion of "Rsquare" (R^2), the "coefficient of determination".

% ( MSEtrn00, MSEtrn00a, Ndof, and Ntrneq are derived below )
net.trainParam.min_grad = MSEtrn00/200;
net.trainParam.goal = max(0,0.01*Ndof*MSEtrn00a/Ntrneq);

%NAIVE CONSTANT OUTPUT MODEL ("00")

[ O Ntrn ] = size(ttrn)
Ntrneq = prod(size(ttrn)) % = Ntrn*O, No. of training equations
meanttrn2 = mean(ttrn,2); % Constant output (regardless of input)
Nw00 = numel(meanttrn2) % = O, No. of estimated unknowns
Ndof00 = Ntrneq-Nw00 % = (Ntrn-1)*O, No. of estimation degrees of freedom

ytrn00 = repmat(meanttrn2,1,Ntrn);
yval00 = repmat(meanttrn2,1,Nval);
ytst00 = repmat(meanttrn2,1,Ntst);

MSEtrn00 = sse(ttrn-ytrn00)/Ntrneq % Biased: same data for training and testing
% = mse(ttrn-ytrn00)
% = mean(var(ttrn',1)) % Biased ( Ntrn divisor )

MSEtrn00a = sse(ttrn-ytrn00)/Ndof00 % DOF-"a"djusted
% = Ntrneq*MSEtrn00/Ndof00
% = Ntrn*mean(var(ttrn',1))/(Ntrn-1)
% = mean(var(ttrn')) % Unbiased ( (Ntrn-1) divisor )
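
As a quick check, the identities above can be verified on random surrogate data. This is a minimal sketch with arbitrary sizes; sum(sum(...)) is used in place of the toolbox sse so it runs in base MATLAB:

% Sanity check of the naive-model identities on surrogate data
O = 2; Ntrn = 100; % arbitrary assumed sizes
ttrn = rand(O,Ntrn); % surrogate targets
Ntrneq = numel(ttrn); % = Ntrn*O
meanttrn2 = mean(ttrn,2); % constant-output model
ytrn00 = repmat(meanttrn2,1,Ntrn);
Ndof00 = Ntrneq - numel(meanttrn2); % = (Ntrn-1)*O
MSEtrn00 = sum(sum((ttrn-ytrn00).^2))/Ntrneq;
MSEtrn00a = sum(sum((ttrn-ytrn00).^2))/Ndof00;
check1 = abs(MSEtrn00 - mean(var(ttrn',1))) < 1e-12 % should be 1
check2 = abs(MSEtrn00a - mean(var(ttrn'))) < 1e-12 % should be 1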

% ADVANCED MODELS ( e.g., NNs )
% I-H-O MLPNN

[ I Ntrn ] = size(xtrn)
Nw = (I+1)*H+(H+1)*O % No. of estimated weights
Ndof = Ntrneq - Nw % No. of estimation degrees of freedom
MSEtrn = sse(ttrn-ytrn)/Ntrneq % Biased: same data for training and testing
% = mse(ttrn-ytrn)
MSEtrna = sse(ttrn-ytrn)/Ndof % DOF-"a"djusted
% = Ntrneq*MSEtrn/Ndof

%COEFFICIENT OF DETERMINATION (Rsquare statistic, R^2; See Wikipedia )
% Fraction of Target Variance "Explained" by the Advanced Model ( 0 <= R^2 <= 1)

NMSEtrn = MSEtrn/MSEtrn00 % Normalized MSE
R2trn = 1 - NMSEtrn % Rsquare (R^2) statistic
NMSEtrna = MSEtrna/MSEtrn00a % DOF-adjusted normalized MSE
R2trna = 1 - NMSEtrna % "A"djusted Rsquare (R^2) statistic

% MSE training goal for Ndof > 0, corresponding to R2trna = 0.99 (my choice)

MSEtrngoala = 0.01*MSEtrn00a
MSEtrngoal = 0.01*Ndof*MSEtrn00a/Ntrneq
% = 0.01*Ndof*mean(var(ttrn'))/Ntrneq
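
Putting the pieces together, a minimal sketch of setting the goal before training might look like this. The constructor fitnet, H = 10, and the assumption that xtrn/ttrn are already the training subset are mine, not part of the recipe; with the toolbox's automatic data division you would compute these from the training indices returned by train instead:

% Hypothetical end-to-end goal setting for R2trna = 0.99
[ I Ntrn ] = size(xtrn);
[ O Ntrn ] = size(ttrn);
H = 10; % assumed No. of hidden nodes
Ntrneq = Ntrn*O; % No. of training equations
Nw = (I+1)*H+(H+1)*O; % No. of estimated weights
Ndof = Ntrneq - Nw; % must be > 0 for this goal
MSEtrn00 = mean(var(ttrn',1)); % biased naive-model MSE
MSEtrn00a = mean(var(ttrn')); % DOF-adjusted naive-model MSE
net = fitnet(H); % assumed network constructor
net.trainParam.goal = max(0,0.01*Ndof*MSEtrn00a/Ntrneq);
net.trainParam.min_grad = MSEtrn00/200;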

% ALTERNATE TECHNIQUES for arbitrary Ndof
% Nw > Ntrneq <==> MORE UNKNOWNS THAN TRAINING EQUATIONS <==> Ndof < 0
% Nw > Ntrneq <==> H > Hub (upper bound)
% ( a combined sketch follows the numbered list )

1. H < Hub = -1 + ceil( ( Ntrneq - O ) / ( I + O + 1 ) )

2. Validation-set stopping: stop training when mse(tval-yval) reaches its minimum before mse(ttrn-ytrn) does

3. Regularization of Goal Minimization Objective (help/doc mse and help/doc mae)

a. REGgoal1 = MSE + alpha*MSW % MSW = mean(sum(squared weights))

b. REGgoal2 = MAE + beta *MAW % MAW = mean(sum(abs(weights)))

4. Bayesian Regularization training function (automatically optimizes alpha or beta)

net.trainFcn = 'trainbr'; % help/doc trainbr
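
A combined sketch of alternatives 1-4, using standard Neural Network Toolbox properties (feedforwardnet, divideFcn/divideParam, max_fail, performParam.regularization); the split ratios, max_fail value, and alpha below are assumptions, not prescriptions:

% Hypothetical sketch of the alternate techniques
[ I Ntrn ] = size(x); [ O Ntrn ] = size(t); % assumed full data x,t
Ntrneq = Ntrn*O;

% 1. Keep H at or below the upper bound
Hub = -1 + ceil( (Ntrneq-O)/(I+O+1) );
H = min(10,Hub); % 10 is an assumed default

% 2. Validation-set stopping: train stops when the validation error
% fails to improve for max_fail consecutive epochs
net = feedforwardnet(H);
net.divideFcn = 'dividerand'; % random trn/val/tst split
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio = 0.15;
net.divideParam.testRatio = 0.15;
net.trainParam.max_fail = 6; % toolbox default

% 3. Built-in MSE + MSW blend via the performance parameter
net.performParam.regularization = 0.1; % assumed alpha; 0 = pure MSE

% 4. Or Bayesian regularization, which optimizes the weighting
% automatically; the validation set can then go back to training
% net.trainFcn = 'trainbr'; % help/doc trainbr
% net.divideFcn = 'dividetrain';

[ net tr ] = train(net,x,t);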

Hope this helps.

Greg


