
Topic: Neural Network -- Incremental Training
Replies: 8   Last Post: Jun 5, 2014 11:41 PM

Greg Heath

Posts: 5,923
Registered: 12/7/04
Re: Neural Network -- Incremental Training
Posted: Jun 5, 2014 11:41 PM


% I have studied the newrb algorithm, and my understanding of it is as follows:
%
% 1. an empty network is created
% a. the weight, net input and transfer functions are defined
% b. the architecture of the network is defined
% c. the design of the network is invoked
% 2. during the initial design stage
% a. the radial basis layer outputs are calculated
% b. the correlation coefficients between the network outputs and the target outputs are calculated
% c. the sample with the most "error" is picked from P
% d. the first neuron is calculated from the picked sample and the network inputs

network 'parameters', not network inputs.

% e. MSE value between the target outputs and the calculated neuron outputs is calculated
% f. the struct 'tr' is created, which holds the epochs and the MSE value for each epoch (this
% records the performance of the network)
% 3. during the iterative stage of the design
% a. the number of iterations is the maximum number of neurons set earlier
% b. again we calculate the correlation coefficients between the network outputs and the
% target outputs
% c. from the remainder of the network inputs we again pick the one with the most "error"
% d. we calculate the next neuron from the picked sample and the network inputs
% e. MSE value between the target outputs and the calculated neuron outputs is calculated
% f. the struct 'tr' is expanded with new epochs and MSEs
% g. if the current MSE is lower than the set goal, break out of the for loop and stop
% adding further neurons
% 4. end of algorithm
% a. the values of w1 (weights in the first layer), b1 (biases of the first layer), w2 (weights of
% the second layer) and b2 (biases of the second layer) are outputs of the algorithm
% b. tr is also an output, but it is not saved in the net object (while w1, b1, w2 and b2 are)
% c. the network parameters are set to the outputs of the design stage
% d. the network is initialized with these values
%
% Could you please tell me whether I have made an error somewhere in my understanding?
%
% I would also like to ask, which variables did you mean when you mentioned the Calibration Set?

Sorry, that was a way I kept MLPs from "forgetting". It is not applicable here.
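
For what it's worth, here is a rough, self-contained sketch of the greedy
loop you describe in steps 2-3. It is illustrative only -- NOT the shipped
newrb code, which differs in detail -- and it assumes the toolbox functions
radbas and dist, and that MN <= Q. Function and variable names are mine.

% p: R-by-Q inputs, t: S-by-Q targets
function [w1,b1,w2,b2,tr] = rbfdesignsketch(p,t,MSEgoal,spread,MN)
Q    = size(p,2);
b    = sqrt(-log(0.5))/spread;      % bias giving output 0.5 at distance 'spread'
used = false(1,Q);
w1   = []; tr = zeros(1,MN);
for k = 1:MN
    if isempty(w1)
        e = t;                      % no neurons yet: error = targets
    else
        a1 = radbas(b*dist(w1,p)); % current hidden-layer outputs
        wb = t/[a1; ones(1,Q)];    % least-squares output layer [w2 b2]
        e  = t - wb*[a1; ones(1,Q)];
    end
    c = radbas(b*dist(p',p));       % basis outputs of every candidate center
    score = sum((e*c').^2,1);       % crude error/basis correlation measure
    score(used) = -Inf;             % each sample can become a center once
    [~,i] = max(score);             % steps 2c/3c: pick the most "error"
    used(i) = true;
    w1 = [w1; p(:,i)'];             % steps 2d/3d: new center = picked sample
    a1 = radbas(b*dist(w1,p));
    wb = t/[a1; ones(1,Q)];
    tr(k) = mean(mean((t - wb*[a1; ones(1,Q)]).^2));  % steps 2e-f/3e-f
    if tr(k) < MSEgoal              % step 3g: goal met, stop adding neurons
        tr = tr(1:k);
        break
    end
end
b1 = b*ones(size(w1,1),1);          % step 4a: design outputs
w2 = wb(:,1:end-1);
b2 = wb(:,end);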

% I would like to test and develop this algorithm on an XOR example, where I would first train an
% RBFNN with 3 samples, initialize the network and save it, and then load the same network
% and add the 4th sample to it. Would this be wise, or should I try a different starting example?

Standardize inputs and regression outputs to zero-mean and unit variance.
Use 0-1 unit vector outputs for classifiers.
Define MSE00 = mean(var(t',1)) % average target variance
A reasonable goal is MSEgoal = 0.01*MSE00 % R^2 = 0.99
Start with spread = 1 and change by powers of 2 until the number of
hidden nodes is minimized.

net = newrb(x,t,MSEgoal,spread);
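
For example, a minimal sketch on your XOR data (variable names are mine;
for 0/1 inputs this small, standardization is optional):

x = [0 0 1 1; 0 1 0 1];                % 2-by-4 inputs
t = [0 1 1 0];                         % 1-by-4 0/1 targets
MSE00   = mean(var(t',1));             % average target variance
MSEgoal = 0.01*MSE00;                  % R^2 = 0.99
for spread = 2.^(-2:2)                 % vary spread by powers of 2
    net = newrb(x,t,MSEgoal,spread);
    fprintf('spread = %g: %d hidden nodes\n', spread, net.layers{1}.size)
end
% keep the spread that minimizes the number of hidden nodes
% (x = mapstd(x) standardizes the rows for larger problems)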
Search the newsgroup for my examples with the keywords

greg newrb

You can try your example, but you will end up with 4 neurons because none
of the 4 samples is 'like' the other 3.

Then try the simple example in the documentation

help newrb

Then try the simple_cluster problem

help nndatasets
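
If I remember correctly, that data ships as simplecluster_dataset
(one of the sets listed by "help nndatasets"), e.g.

[x,t] = simplecluster_dataset;   % 2-dim inputs, 0/1 class-indicator targets
MSEgoal = 0.01*mean(var(t',1));  % same recipe as above
net = newrb(x,t,MSEgoal,1);      % spread = 1 to start, then powers of 2
yc  = vec2ind(sim(net,x));       % winner-take-all class assignments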

Hope this helps.

The main fault of this network is that you cannot define
a set of initial clusters.
