Greg Heath

Re: Training multiple data for a single feedforwardnet
Posted: Nov 3, 2012 7:40 PM

"Carlos Aragon" wrote in message <k742iq$il8$1@newscl01ah.mathworks.com>...
> "Greg Heath" <heath@alumni.brown.edu> wrote in message <k6vg6s$36r$1@newscl01ah.mathworks.com>...
> > "Carlos Aragon" wrote in message <k6ut1o$1mo$1@newscl01ah.mathworks.com>...
> > > "Greg Heath" <heath@alumni.brown.edu> wrote in message <k6rgsg$sp3$1@newscl01ah.mathworks.com>...
> > > > PLEASE, PLEASE DO NOT TOP POST!!!
> > > >

> > > > The best is to use a modification of NEWRB that allows the input of an initial
> > > > hidden layer. Then
> > > >
> > > > 1. After training with set1, use those weights as initial weights for training with set2 + set1.
> > > >
> > > > or, if you are lucky,
> > > >
> > > > 2. After training with set1, use those weights as initial weights for training with set2 and a
> > > > "characteristic subset" of set1. The drawback is how to define that characteristic subset.
> > > >
> > > > The reason this works is that each hidden node basis function has a local region of influence
> > > > and a 1-to-1 correspondence with a previously worst-classified training vector.
----SNIP
> > > >
> > > > Then simultaneously train on samples or characteristic exemplars from all 14.

> > >
> > > > If all of the data is not available at once, do it in stages.
> > >
> > > I have all the training and test data, but I don't know how I could train 14
> > > training vectors and then validate with just 1 set to check whether the neural net is generalizing well.

Train on a subsample of all 14 data sets.
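
For example, something along these lines (just a sketch; the cell arrays ia_set, w_set and tq_set
holding the 14 current, speed and load records are made-up names, and the decimation step is arbitrary):

% Sketch: subsample each of the 14 records and concatenate them
% column-wise into one training input matrix P and target matrix T.
step = 10;                        % keep every 10th measurement (arbitrary choice)
ind  = 1:step:5001;               % subsample indices
P = [];  T = [];
for k = 1:14
    Pk = [ T1(ind) ; ia_set{k}(ind)' ; w_set{k}(ind)' ];   % [ V ; ia ; w ]
    Tk = tq_set{k}(ind)';                                   % load target
    P  = [ P  Pk ];
    T  = [ T  Tk ];
end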

> > > Trying to be clear about what I'm doing, here is the code:
> > >
> > > ia=linear_train_1(1:5001,4);   % training current, phase 'a'
> > > w=linear_train_1(1:5001,5);    % training motor speed
> > > tq=linear_train_1(1:5001,2);   % training load (target)
> > > T1=[198:0.000799840032:202];   % Voltage is between 198V and 202V
> > > iateste1=ia_lin_1(1:5001,4);   % test current, phase 'a'
> > > wteste1=ia_lin_1(1:5001,5);    % test motor speed

> >
> > Seems finely spaced. Do you really need this much data? See below.
> >

> > > P=[T1;ia';w']; % This is the training matrix that, in this case, trains just 1 set of data.
> > > T=[tq'];
> > >
> > > I want my neural net to recognize 14 samples of [T1;ia';w']. T1 is fixed, but
> > > 'ia' and 'w' vary according to the load equation I'm changing in my motor model. The
> > > question is: how could I train it to recognize those 14 samples? If I make 'ia' and 'w' a
> > > matrix of 14 different currents and speeds, this neural net does not allow me to test a single
> > > vector like the one below.
> >
> > You need to test matrices, not single vectors.
> >

> > > net=feedforwardnet([5 25],'trainbr');
> >
> > Why 2 hidden layers? Why H = 25? Why 'trainbr'?
> trainbr: I have to use Bayesian regularization.


You did not answer WHY you think you need to use trainbr !

> > > net.trainParam.goal = 0.005; %error
> >
> > Why?

> defined goal.

But WHY is it the number 0.005 ? Why not 0.01 or 0.0001?

> > > net.trainParam.epochs = 2000;
> >
> > Why?

> the maximum number of training epochs I want, in case trainParam.goal is not reached.

How did you determine that the default value is insufficient?
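
One quick check, assuming the training record tr has been returned by TRAIN (a sketch):

tr.stop          % reason training stopped (e.g. goal met or epoch limit reached)
tr.num_epochs    % epochs actually used; compare with net.trainParam.epochs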

> > > net=train(net,P,T);
> >
> > Performance evaluation??
> >
> > [ net tr ] = ...
> > MSEtrn = ?
> > MSEval =?
> > MSEtst = ?

>
> I don't understand what it means.


Always include the output structure tr when training. It contains a plethora of training information,
in particular the mean square errors (MSEs) for the trn, val and tst subsets. To see what I mean, just
type the following after the training command:

tr = tr
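
The subset errors can also be pulled out directly (a sketch, assuming the default dividerand data
division; these are standard fields of the training record):

[ net tr ] = train( net, P, T );
MSEtrn = tr.best_perf       % training-subset MSE at the best epoch
MSEval = tr.best_vperf      % validation-subset MSE
MSEtst = tr.best_tperf      % test-subset MSE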

> > Otherwise, how do you obtain separate trn/val/tst results?
> >

> > > P1=[T1;iateste1';wteste1'];
> > > Y = sim(net,P1);
> > >
> > > As you can see, I'm not an expert on this ... I was hoping you could help me build this
> > > process of training and validation. Thanks a lot for your help!
> >
> > This is post No. 8 of this thread and you don't seem to be any further along than you were
> > at the first post. So, let's start again

>
> It's a difficult task to explain. The goal of this thread is to determine, code by code, how to train
> different sets of [V;Ia;w] as defined above, so that my neural net will recognize those 14 data sets.


No. This is not a pattern recognition task. You are not trying to develop a 14-class classifier.

You are trying to develop a regression or curvefitting model that estimates the motor load from 14 samples of voltage, current and motor speed measurements.

> > 1. What is a motor model?
> Simulink-SimPowerSystems
>
> There's an induction machine model. In summary, I'm extracting data from this model,
> combined with another procedure that does not matter here.
>

> > 2. What is a motor load?
>
> A motor is a device that converts electrical energy into mechanical energy to act upon a
> mechanical load. The burden placed on the motor due to this mechanical activity is referred to as the motor load.
>

> > 3. What are V, ia, w and tq ?
> V -> Voltage
> ia -> Current on phase 'a'
> w -> motor speed
> tq -> it's the load generated according to the type of burden used to train.


Somehow it bothers me to mix up lower case and capitals when choosing variable names. Also, I like to use variable names that someone not familiar with the work can look at and immediately know what it is without going back to look at the definition. For example, s or ms for motor speed.

Is tq torque?

> > 4.What are the corresponding correlation coefficients?
> I think that doesn't matter either.

WRONG, WRONG, WRONG.

Corr coefs tell the complete story in a linear model where the variables are standardized.
Although this is not true for nonlinear models, in addition to I/O plots they are usually the first
quantitative clue as to (1) whether or not a particular input variable is significant for estimating a particular output variable, and (2) what the apparent significance rankings are for the inputs.

Not only do I obtain the correlation coefficient matrix for all variables, I also create the
BACKSLASH linear model. Sometimes I also create a reduced variable linear model
using STEPWISEFIT in a backward search mode.

I have found this info useful in a number of difficult NNET designs.
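
For example (a sketch; here x = P' is the N-by-3 input matrix and t = T' the N-by-1 target,
i.e. one observation per row; STEPWISEFIT is in the Statistics Toolbox):

x = P';   t = T';                          % one observation per row

% Correlation coefficients of all variables (inputs and target)
R = corrcoef( [ x  t ] )                   % last column: input-vs-target correlations

% BACKSLASH linear model with a bias term
W    = [ ones(size(x,1),1)  x ] \ t;
tlin = [ ones(size(x,1),1)  x ] * W;
NMSE = mean( (t - tlin).^2 ) / var(t,1)    % normalized MSE of the linear fit

% Reduced-variable linear model: STEPWISEFIT starting with all inputs (backward search)
[ b, se, pval, inmodel ] = stepwisefit( x, t, 'inmodel', true(1,size(x,2)) );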

> > 5. What , exactly, are the differences between the 14 data sets?
> Having defined what load is, the difference between the 14 data sets is the type of burden
> I'm using on the motor.
>

> > 6. Have you plotted the output to determine how much sample spacing is
> > needed to adequately characterize it?

> 5000 samples are enough to cover values from the transient state to the steady state.

This question is about spacing. For example, could you double the spacing and get by with
only ~2500 measurements, etc.?
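
In other words (a sketch):

ind2 = 1:2:size(P,2);      % double the spacing: keep every other column
P2   = P(:,ind2);
T2   = T(:,ind2);
% retrain on P2,T2 and compare the resulting MSEs with the full-resolution design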

> > 7. Given that spacing, how much data is needed for that characterization?
> 6.

Is 6 an answer or a typo? If the former, what does it mean?

> > 8. Your first post mentions 10,006 measurements but later you use 5,001.
> Yes. I've cut unnecessary data.
>

> > Is that for each of the 14 data sets?
> One data set is one set of values of V, Ia and W. Only 'V' is fixed; Ia and W vary in each of the
> 14 data sets. There are 5001 values of Ia and 5001 values of W, as there are 5001
> values of the fixed V (voltage).

OK. Just checking. Typically, the terminology "data set" refers to the complete collection
of input and output variables for one or more operating conditions.

> > 9. As I stated before
> > 1. Only 1 hidden layer is necessary
> > 2. If you have 14 scenarios that you want to characterize with one net:
> > a. Take 6 and 7 into consideration and combine samples of all 14 into
> > multiple mixed subsets.
> > b. Since you have a large data set, Train/Validate and Test with a
> > 0.34/0.33/0.33 data split.
> > c. Use one or more data sets, as many defaults as possible, and vary
> > H to find the minimum acceptable value.
> >
> > This should give you a solid start.
> >
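
A minimal sketch of item 9 above (the candidate range for H and the acceptance threshold are
illustrative choices, not recommendations):

MSEgoal = 0.005;                              % acceptance threshold (illustrative)
for H = 1:10                                  % candidate hidden-layer sizes (illustrative)
    net = feedforwardnet( H );                % ONE hidden layer, default training function
    net.divideParam.trainRatio = 0.34;        % Train/Val/Test = 0.34/0.33/0.33
    net.divideParam.valRatio   = 0.33;
    net.divideParam.testRatio  = 0.33;
    [ net tr ] = train( net, P, T );
    fprintf( 'H = %2d  MSEtrn = %g  MSEval = %g  MSEtst = %g\n', ...
             H, tr.best_perf, tr.best_vperf, tr.best_tperf );
    if tr.best_vperf <= MSEgoal               % smallest H that is acceptable
        break
    end
end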


Thanks for answering the questions. (Again, why trainbr ?)

Hope this helps.

Greg

