


Re: Improving ANN results
Posted:
Oct 25, 2013 9:56 PM


"chaudhry " <bilal_zafar9@yahoo.com> wrote in message <l4a24h$33$1@newscl01ah.mathworks.com>...
%Why not Gui code?
There is a bewildering number of choices for the newbie. It is better to concentrate on the important ones and accept the remaining defaults.
% sir greg....what should i conclude ...which is to use ....delete row 1 of % dataset or not
Greg, not sir greg or sir
Although a MATLAB default function will delete constant-row/zero-variance variables, why add to the confusion by using them in the first place???
% sir why u didnt counter weights and sir by giving loop to trials...what % happens...is that not better if v give loop for mse value....... such % that system will train untill the mse value is at its minimum(what v have % given)
Given the number of hidden nodes and a set of initial weights, the training algorithm does the best it can. Therefore, those are the only parameters that need to be changed.
However, to prevent the algorithm from wasting time on insignificant improvements, I do use higher values for MSEgoal and MinGrad.
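For reference, a minimal MATLAB sketch of raising those two stopping criteria (the 0.01 factor and the /100 divisor are illustrative choices, not toolbox defaults; simplefit_dataset is just a stand-in for your own data):

```matlab
% Sketch: loosen the stopping criteria so training quits once further
% MSE reduction is insignificant (illustrative values, not defaults).
[x, t] = simplefit_dataset;              % any I x N inputs, O x N targets
MSE00  = mean(var(t', 1));               % reference MSE of the naive constant model
net    = fitnet(10);                     % 10 hidden nodes (example)
net.trainParam.goal     = 0.01 * MSE00;          % MSEgoal: stop at 1% of MSE00
net.trainParam.min_grad = net.trainParam.goal / 100;  % MinGrad (assumed scaling)
[net, tr] = train(net, x, t);
```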
% HI GREG THNX ALOT ....I HAVE FEW Questions regarding the above code MY % QUESTIONS MAY SEEMS STUPID BECAUSE IAM BEGINNER . I HAVE database of % around 80 cross 30 parameters.
Do you mean an 80 x 30 data matrix of variables???
% 1.SIR y u have chose Ntrials parameter for forloop for training % purpose.....can v use for example such that errors=targetsoutputs while % errors~=0 .......... train(net,x)................. some thing like % that.....or while mse~=0 train(net,x) is it right or wrong i dont % knw.......if wrong kindly tell me the reason
train already has a full complement of stopping criteria. However, zero error and zero slope are not practical. That is why I specify my own nonzero goals for MSEgoal and MinGrad.
The important goal is to try to maximize performance on NONTRAINING data. Trying to obtain zero error on training data is seldom achievable without overtraining an overfit net (Nw > Ntrneq). More importantly, past a certain point, reducing training error does not result in the reduction of nontraining error. That is why training is terminated when the error on the nontraining validation set reaches a minimum.
% 2. why sir u havent considered weights ,,biases..epochs...etc.. and % learning rate value....trainlm is independent of lr BUT Y then how he % make system learn...from example
The maximum allowable number of epochs is specified. What is there to consider about weights and biases?
What made you ask this question?
From experience, the only parameters I have to change are
1. Time series ID, FD and net.divideFcn
2. MSEgoal and MinGrad
3. Hmin, dH, Hmax
4. Ntrials
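A skeletal version of the multi-trial design loop those parameters feed (the names Hmin, dH, Hmax, Ntrials follow the list above; the grid values and dataset are assumptions for illustration):

```matlab
% Sketch: loop over candidate hidden-layer sizes and random weight
% initializations, keeping the design with the best validation MSE.
[x, t] = simplefit_dataset;                  % stand-in for your own data
rng('default')                               % reproducible random initializations
Hmin = 2; dH = 2; Hmax = 10; Ntrials = 10;   % example search grid
bestPerf = Inf;
for H = Hmin:dH:Hmax
    for trial = 1:Ntrials
        net = fitnet(H);                     % fresh net => new random initial weights
        [net, tr] = train(net, x, t);
        if tr.best_vperf < bestPerf          % compare validation-set MSE
            bestPerf = tr.best_vperf;
            bestNet  = net;
        end
    end
end
```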
% 3.sir y v standardize
To make my life easier. For the thousands of important nets I've designed, means are zero and standard deviations are one. Result?
a. Comparison plotting is easier.
b. Outliers are easy to spot.
c. Training is not compromised by sigmoid saturation.
d. Weight/bias sizes are not affected by widely diverse I/O scaling.
e. MSE values instantly mean something because the reference MSE is 1.
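Standardization itself is a one-liner per matrix; a sketch (zscore is from the Statistics Toolbox and works down columns, hence the transposes so each variable ends up with zero mean and unit variance):

```matlab
% Sketch: standardize each input/target variable to zero mean, unit variance.
[zx, mux, stdx] = zscore(x');  zx = zx';   % x is I x N: transpose, standardize, transpose back
[zt, mut, stdt] = zscore(t');  zt = zt';   % keep mut/stdt to un-standardize outputs later
% Un-standardize net outputs zy (O x N) back to original target units:
% y = bsxfun(@plus, bsxfun(@times, zy', stdt), mut)';
```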
%4.kindly explain these lines
The only performance measure that makes sense to me is Normalized MSE and R^2 = 1 - NMSE. For example, a result of MSE = 19.41, by itself, means absolutely nothing. However, if MSE00 = 1941, then NMSE = 0.01 and R^2 = 0.99, and if Ntrneq >> Nw, then I get a warm feeling all over.
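In code the normalization is just a division by the MSE of the naive constant-output model (a sketch; net is an already-trained net and y its output):

```matlab
% Sketch: Normalized MSE and R^2 relative to the naive constant model
% that always outputs the target means.
MSE00 = mean(var(t', 1));                 % naive-model reference MSE
y     = net(x);                           % trained-net outputs
NMSE  = mean((t(:) - y(:)).^2) / MSE00;   % fraction of target variance left unexplained
R2    = 1 - NMSE;                         % e.g. NMSE = 0.01 => R^2 = 0.99
```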
> Ntrials = max(10,30/Ntst)
Errors are assumed to be zero-mean Gaussian distributed. It is common knowledge that Gaussian statistics estimates tend to be reliable when the sample size is at least 30.
So, replace with
Ntrials >= 30/min(Ntrn,Nval,Ntst)
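For example, with the default 0.70/0.15/0.15 split of a hypothetical N = 80 cases, the smallest subset sets the floor:

```matlab
% Sketch: size Ntrials from the smallest data subset (assumed N = 80).
N    = 80;
Ntrn = round(0.70*N);                     % 56 training cases
Nval = round(0.15*N);                     % 12 validation cases
Ntst = N - Ntrn - Nval;                   % 12 test cases
Ntrials = max(10, ceil(30 / min([Ntrn, Nval, Ntst])));
```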
% why v have found medians..means....etc
Understanding of results and confirmation of robustness.
Statistical results are more believable to a sponsor or customer if they are accompanied by error bars and/or confidence levels.
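Given a vector of test-set R^2 values from the Ntrials candidate designs, those summaries are one-liners (a sketch; the R2tst values below are hypothetical, and the 1.96-sigma interval is a rough large-sample approximation):

```matlab
% Sketch: summarize test-set R^2 across the candidate designs.
R2tst = [0.91 0.95 0.88 0.96 0.93];                     % hypothetical trial results
stats = [min(R2tst) median(R2tst) mean(R2tst) std(R2tst) max(R2tst)];
ci95  = mean(R2tst) + 1.96*std(R2tst)/sqrt(numel(R2tst))*[-1 1];  % rough 95% CI
```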
%two more questions regarding the above question
1. > > Note that only 2 of 30 designs have R2tst >= 0.95 !!!
% this thing you have told me after the results. sir did this mean % that our data is not appropiate because 2 out 30 designs were % fine. so i will train it again with other technique.
I would be very surprised if you could obtain results that are significantly better. If this were important, you would be forced to obtain more data to obtain more convincing error bars and/or confidence levels.
% 2. what is the problem with default data division which is generated through % advanced script.
You don't have nearly enough data to obtain a convincing default (0.70/0.15/0.15)*8 ≈ 6/1/1 data-division design model for use with unseen nontraining data.
% 3. sir you have used FOR LOOP for Ntrials = 1:10. sir why dont u have use it with % while loop such while certain best performance or least mse is achieved .... % and it will start loop from ntrial=1 then 2 then 3 then 4 etc and it will stop when % certain mse or error is achieved.
There is no certainty that a training goal can be reached by a design that will also work well on nontraining data.
Hope this helps.
Greg



