
Topic: Improving ANN results
Replies: 16   Last Post: Nov 13, 2013 9:10 AM


 Greg Heath Posts: 6,387 Registered: 12/7/04
Re: Improving ANN results
Posted: Oct 25, 2013 9:56 PM

"chaudhry" <bilal_zafar9@yahoo.com> wrote in message <l4a24h$33$1@newscl01ah.mathworks.com>...

%Why not Gui code?

Bewildering number of choices for the newbie. Better to concentrate on the important
ones and accept the remaining defaults.

% Sir Greg, what should I conclude? Which should I use: delete row 1 of
% the dataset or not?

Greg, not sir greg or sir.

Although a MATLAB default function will delete constant-row/zero-variance
variables, why add to the confusion by using them in the first place?

% Sir, why didn't you consider weights? And sir, by giving a loop over trials,
% what happens? Isn't it better if we loop on the MSE value, such that the
% system trains until the MSE reaches its minimum (the value we have
% given)?

Given the number of hidden nodes and a set of initial weights, the training
algorithm does the best it can. Therefore, those are the only parameters that
need to be changed.

However, to prevent the algorithm from wasting time on insignificant improvements,
I do use higher values for MSEgoal and MinGrad.
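
The trial loop described above (loop over random weight initializations, not over MSE values) can be sketched in Python. `train_once` is a hypothetical stand-in for a call to MATLAB's `train` with fresh initial weights; the tanh hidden layer and least-squares output layer are simplifying assumptions, not the toolbox's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def train_once(X, t, H, rng):
    # Hypothetical stand-in for MATLAB's train(): a net with H tanh
    # hidden units, random input weights, least-squares output weights.
    W = rng.normal(size=(H, X.shape[0]))           # fresh random initial weights
    Z = np.tanh(W @ X)                             # hidden-layer activations
    B, *_ = np.linalg.lstsq(Z.T, t.T, rcond=None)  # output-layer weights
    y = (Z.T @ B).T                                # network output
    return float(np.mean((t - y) ** 2))            # training-set MSE

X = rng.normal(size=(3, 50))                       # 3 inputs, 50 cases (toy data)
t = np.sum(X, axis=0, keepdims=True)               # toy target

MSEgoal = 0.01 * np.var(t)                         # nonzero goal, not zero
Ntrials = 10
best = np.inf
for trial in range(Ntrials):                       # loop over random inits ...
    best = min(best, train_once(X, t, H=10, rng=rng))
    if best <= MSEgoal:                            # ... and quit once the goal is met
        break
print(best < np.var(t))
```

The point of the structure is that the number of hidden nodes and the initial weights are the only things varied; each trial's training run handles its own stopping.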

% Hi Greg, thanks a lot. I have a few questions regarding the above code. My
% questions may seem stupid because I am a beginner. I have a database of
% around 80 cross 30 parameters.

Do you mean an 80 x 30 data matrix of variables?

% 1. Sir, why have you chosen the Ntrials parameter for the training
% for-loop? Can we use, for example, errors = targets - outputs with
% "while errors ~= 0, train(net,x)", or "while mse ~= 0, train(net,x)"?
% Is that right or wrong? I don't know; if wrong, kindly tell me the reason.

train already has a full complement of stopping criteria. However, zero error
and zero slope are not practical. That is why I specify my own nonzero goals
for MSEgoal and MinGrad.

The important goal is to try to maximize performance on NONTRAINING data.
Trying to obtain zero error on training data is seldom achievable without
overtraining an overfit net (Nw > Ntrneq). More importantly, past a certain point,
reducing training error does not result in the reduction of nontraining error.
That is why training is terminated when the error on the nontraining validation
set reaches a minimum.
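
The stop-at-validation-minimum rule can be sketched generically. The patience window is an assumption for illustration (MATLAB's `max_fail` parameter plays a similar role), and `train_step`/`val_mse` are hypothetical callbacks, not toolbox functions:

```python
import numpy as np

def early_stop_training(train_step, val_mse, max_epochs=1000, patience=6):
    """Generic early stopping: train_step() advances training one epoch;
    val_mse() measures error on the held-out validation set. Stop when
    validation error has not improved for `patience` epochs."""
    best_val, best_epoch = np.inf, 0
    for epoch in range(max_epochs):
        train_step()
        v = val_mse()
        if v < best_val:
            best_val, best_epoch = v, epoch       # new validation minimum
        elif epoch - best_epoch >= patience:      # minimum passed: overtraining
            break
    return best_val, best_epoch

# Toy validation curve: falls, then rises again (the overtraining region).
curve = iter([1.0, 0.5, 0.3, 0.25, 0.26, 0.3, 0.4, 0.5] + [0.6] * 100)
best, at = early_stop_training(lambda: None, lambda: next(curve))
print(best, at)
```

Note that training error could keep falling past epoch 3 here; the loop stops anyway, because the validation minimum is the quantity that predicts nontraining performance.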

% 2. Why, sir, haven't you considered weights, biases, epochs, etc., and the
% learning rate value? trainlm is independent of lr, but then how does it
% make the system learn from examples?

The maximum allowable number of epochs is specified.
What is there to consider about weights and biases?

What made you ask this question?

From experience, the only parameters I have to change are

1. Time series ID, FD and net.divideFcn
2. MSEgoal and MinGrad
3. Hmin,dH,Hmax
4. Ntrials

% 3. Sir, why do we standardize?

To make my life easier. For the thousands of important nets I've
designed, means are zero and standard deviations are one. Result?
a. Comparison plotting is easier
b. Outliers are easy to spot.
c. Training is not compromised by sigmoid saturation
d. Weight/bias sizes are not affected by widely diverse I/O scaling
e. MSE values instantly mean something because the reference
MSE is 1.
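
The standardization Greg describes (each variable transformed to zero mean and unit standard deviation, as MATLAB's `mapstd` or `zscore` would do) can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(loc=50.0, scale=7.0, size=(4, 100))  # raw inputs, arbitrary scale

mu = X.mean(axis=1, keepdims=True)                  # per-variable mean
sigma = X.std(axis=1, keepdims=True)                # per-variable std
Z = (X - mu) / sigma                                # zero-mean, unit-variance rows

print(np.allclose(Z.mean(axis=1), 0), np.allclose(Z.std(axis=1), 1))
```

After this transform a value of |z| > 3 immediately flags a candidate outlier, and inputs of order one keep tanh/logsig hidden units out of their flat saturated regions.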

% 4. Kindly explain these lines.

The only performance measure that makes sense to me is Normalized
MSE and R^2 = 1-NMSE. For example, a result of MSE = 19.41, by itself,
means absolutely nothing. However, if MSE00 = 1941, then NMSE = 0.01 ,
R^2 = 0.99, and if Ntrneq >> Nw, then I get a warm feeling all over.

> Ntrials = max(10,30/Ntst)

Errors are assumed to be zero-mean Gaussian distributed. It is common
knowledge that Gaussian statistics estimates tend to be reliable when
the sample size is at least 30.

So, replace with

Ntrials >= 30/min(Ntrn,Nval,Ntst)
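
A worked instance of this rule, assuming 80 cases split roughly 0.70/0.15/0.15 (the split sizes are illustrative, not from the thread):

```python
import math

Ntrn, Nval, Ntst = 56, 12, 12                         # hypothetical 80-case split
lower_bound = math.ceil(30 / min(Ntrn, Nval, Ntst))   # >= 30 errors per subset
Ntrials = max(10, lower_bound)                        # keep the original floor of 10
print(lower_bound, Ntrials)
```

Here the 30-sample rule would be satisfied by 3 trials, but the floor of 10 in the original `max(10, 30/Ntst)` expression dominates.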

% Why have we found medians, means, etc.?

Understanding of results and confirmation of robustness.

Statistical results are more believable to a sponsor or customer if they are
accompanied by error bars and/or confidence levels.
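
Summarizing the trials with medians, means, and error bars can be sketched as below; the `R2tst` values are fabricated placeholders standing in for the 30 designs' test-set R^2, and the normal-approximation confidence interval is one common choice:

```python
import numpy as np

rng = np.random.default_rng(3)
R2tst = rng.uniform(0.6, 0.99, size=30)   # placeholder R^2 from 30 designs

med  = np.median(R2tst)                   # robust central value
mean = R2tst.mean()
# 95% confidence half-width on the mean (normal approximation)
half = 1.96 * R2tst.std(ddof=1) / np.sqrt(len(R2tst))
print(f"median={med:.3f} mean={mean:.3f} +/- {half:.3f}")
```

The median is the more robust summary when a few trials converge to poor local minima; the mean with its error bar is what a sponsor's confidence statement is built on.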

% Two more questions regarding the above question:

1. > > Note that only 2 of 30 designs have R2tst >= 0.95 !!!

% This thing you have told me after the results. Sir, did this mean
% that our data is not appropriate, because only 2 out of 30 designs were
% fine? So I will train it again with another technique.

I would be very surprised if you could obtain results that are significantly
better. If this were important, you would be forced to obtain more data
to obtain more convincing error bars and/or confidence levels.

% 2. What is the problem with the default data division which is generated
% through the advanced script?

You don't have nearly enough data to obtain a convincing default
(0.70/0.15/0.15)*8 = 6/1/1 data-division design model for use with unseen
nontraining data.
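
The arithmetic behind the 6/1/1 figure, assuming the *8 refers to eight effective cases per division as written above:

```python
# Default MATLAB division fractions applied to N = 8 cases:
N = 8
splits = [round(f * N) for f in (0.70, 0.15, 0.15)]  # 5.6/1.2/1.2 -> 6/1/1
print(splits)
```

A validation and test set of one case each cannot produce a believable estimate of nontraining performance, which is the point of the remark.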

% 3. Sir, you have used a FOR loop for Ntrials 1:10. Sir, why haven't you
% used a while loop, such that it runs until a certain best performance or
% least MSE is achieved? It would start the loop from ntrial = 1, then 2,
% then 3, then 4, etc., and stop when a certain MSE or error is achieved.

There is no certainty that a training goal can be reached for a design that
will work well on nontraining data.

Hope this helps.

Greg

Date Subject Author
10/13/13 chaudhry
10/15/13 Greg Heath
10/15/13 Greg Heath
10/17/13 chaudhry
10/17/13 chaudhry
10/18/13 Greg Heath
10/17/13 chaudhry
10/17/13 chaudhry
10/23/13 chaudhry
10/23/13 chaudhry
10/23/13 chaudhry
10/25/13 Greg Heath
11/3/13 chaudhry
11/3/13 Greg Heath
11/6/13 Greg Heath
11/6/13 Greg Heath
11/13/13 chaudhry

© The Math Forum at NCTM 1994-2018. All Rights Reserved.