Topic: Neural Network -- Incremental Training
Replies: 8   Last Post: Jun 5, 2014 11:41 PM

 Greg Heath Posts: 6,387 Registered: 12/7/04
Re: Neural Network -- Incremental Training
Posted: Jun 5, 2014 11:41 PM

% I have studied the newrb algorithm, and my understanding of it is as follows:
%
% 1. an empty network is created
% a. the weight, net input and transfer functions are defined
% b. the architecture of the network is defined
% c. the design of the network is invoked
% 2. during the initial design stage
% a. the radial basis layer outputs are calculated
% b. the correlation coefficients between the network outputs and the target outputs are calculated
% c. the sample with the most "error" is picked from P
% d. the first neuron is calculated from the picked sample and the network inputs

network 'parameters'

% e. the MSE between the target outputs and the calculated neuron outputs is computed
% f. the struct 'tr' is created, which holds the epochs and MSE values for each epoch (with this it
% indicates the performance of the network)
% 3. during the iterative stage of the design
% a. the number of iterations is the maximum number of neurons set earlier
% b. again we calculate the correlation coefficients between the network outputs and the
% target outputs
% c. from the remainder of the network inputs we again pick the one with the most "error"
% d. we calculate the next neuron from the picked sample and the network inputs
% e. the MSE between the target outputs and the calculated neuron outputs is computed
% f. the struct 'tr' is expanded with new epochs and MSEs
% g. if the current MSE is lower than the set goal, break the for loop and end training of
% further neurons
% 4. end of algorithm
% a. the values of w1 (weights in the first layer), b1 (biases of the first layer), w2 (weights of
% the second layer) and b2 (biases of the second layer) are outputs of the algorithm
% b. tr is also an output, but it is not saved in the net object (while w1, b1, w2 and b2 are)
% c. the network parameters are set to the outputs of the design stage
% d. the network is initialized with these values
%
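The loop you describe in steps 2-3 can be sketched in plain MATLAB roughly as follows. This is my own paraphrase of the design, NOT the newrb source code; P (R x Q inputs), T (S x Q targets), sp (spread), MN (max neurons) and goal (MSE goal) are assumed to exist, and radbas(n) = exp(-n^2) is written out inline:

```matlab
[R,Q] = size(P);
b1 = sqrt(-log(.5))/sp;               % hidden bias derived from the spread
D  = zeros(Q);                        % pairwise sample distances
for i = 1:Q
    for j = 1:Q
        D(i,j) = norm(P(:,i)-P(:,j));
    end
end
H = exp(-(b1*D).^2);                  % row i = candidate neuron i's outputs
picked = []; e = T;                   % no neurons yet; error = targets
for k = 1:min(MN,Q)
    % score each unused candidate by correlation with the current error
    score = sum((e*H').^2,1) ./ max(sum(H.^2,2)',eps);
    score(picked) = -Inf;
    [~,i] = max(score);
    picked(end+1) = i;                % steps 2c/3c: pick max-"error" sample
    A1 = H(picked,:);                 % hidden-layer outputs so far
    Wb = T/[A1; ones(1,Q)];           % least-squares output weights + bias
    e  = T - Wb*[A1; ones(1,Q)];
    if mean(e(:).^2) < goal, break, end   % step 3g: goal reached
end
W1 = P(:,picked)'; B1 = b1*ones(numel(picked),1);   % step 4a outputs
W2 = Wb(:,1:end-1); B2 = Wb(:,end);
```

The key point, as in your step 3c, is that each pass adds exactly one neuron, centered on the sample the current network explains worst, and then refits the linear output layer.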
% Could you please tell me whether I have made an error somewhere in my understanding?
%
% I would also like to ask, which variables did you mean when you mentioned the Calibration Set?

Sorry, that was a way I kept MLPs from "forgetting". It is not applicable here.

% I would like to test and develop this algorithm on a XOR example, where I would first train a
% RBFNN with 3 samples, initialize the network and save it and then load the same network
% and add the 4th sample to it. Would this be wise, or should I try with a different starting example?

Standardize inputs and regression outputs to zero-mean and unit variance
Use 0-1 unit vector outputs for classifiers
Define MSE00 = mean(var(t',1)) % average target variance
A reasonable goal is MSEgoal = 0.01*MSE00 % R^2 = 0.99
subject to the constraint that the number of hidden nodes is minimized
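In MATLAB that looks like the following sketch (x and t stand for your own standardized inputs and targets; the spread value is an assumption, chosen by trial and error):

```matlab
[x,t]   = simplefit_dataset;        % placeholder data; substitute your own
MSE00   = mean(var(t',1));          % average target variance
MSEgoal = 0.01*MSE00;               % => R^2 = 1 - MSE/MSE00 >= 0.99
spread  = 1;                        % assumed; tune by trial and error
net     = newrb(x, t, MSEgoal, spread);
```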

Search for my examples

greg newrb

You can try your example and end up with 4 neurons because none of the
4 is 'like' the other 3
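A sketch of that check, using the newrb(P,T,goal,spread,MN) calling syntax (goal and spread values here are just illustrative):

```matlab
p = [0 0 1 1; 0 1 0 1];      % the 4 XOR inputs
t = [0 1 1 0];               % XOR targets
net = newrb(p, t, 0, 1, 4);  % goal 0 forces it to keep adding neurons
y = sim(net, p);             % expect all 4 samples to become centers
```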

Then try the simple example in the documentation

help newrb

Then try the simple_cluster problem

help nndatasets

Hope this helps.

The main fault of this network is that you cannot define
a set of initial clusters.

Date Subject Author
4/19/06 George Xu
4/22/06 Greg Heath
4/24/06 George Xu
5/28/14 Marko Kolarek
5/29/14 Greg Heath
5/30/14 Marko Kolarek
5/31/14 Greg Heath
6/5/14 Marko Kolarek
6/5/14 Greg Heath