

Topic: Training multiple data for a single feedforwardnet
Replies: 19   Last Post: Dec 1, 2012 6:14 PM

 Carlos Aragon Posts: 11 Registered: 10/17/12
Re: Training multiple data for a single feedforwardnet
Posted: Oct 29, 2012 2:29 PM

Greg, thanks in advance. You're helping a lot!

You said:

(..)

> The best is to use a modification of NEWRB that allows the input of an initial
> hidden layer. Then
>
> 1. After training with set1, use those weights as initial weights for training with set2 + set1.
>
> 2. After training with set1, use those weights as initial weights for training with set2 and a "characteristic subset" of set1. The drawback is how to define that characteristic.
>
> The reason this works is that each hidden node basis function has a local region of influence and a 1-to-1 correspondence with a previous worst-classified training vector.

(...)

I'm having trouble performing this in MATLAB. Is there an automated way to record set1 and then use it when training on set2? How could I do it? Actually, I want my feedforwardnet to recognize 14 different sets of motor loads.
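To make the question concrete, this is the kind of workflow I'm trying to automate (just a sketch; P1, T1, P2, T2 are placeholder input/target matrices for two of my load sets, and I'm not sure the retraining call behaves the way I hope):

```matlab
% Sketch only: record the net trained on the first load set, then
% reload it in a later session before training on the second set.
% P1,T1 and P2,T2 are placeholder input/target matrices for set1, set2.
net = feedforwardnet([5 25], 'trainbr');
net = train(net, P1, T1);        % train on set1
save('net_set1.mat', 'net');     % record the set1-trained net

% ... later session ...
load('net_set1.mat', 'net');     % restore the recorded net
net = train(net, P2, T2);        % continue training, now on set2
```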

Thanks!!

"Greg Heath" <heath@alumni.brown.edu> wrote in message <k5v9a4\$pj6\$1@newscl01ah.mathworks.com>...
> "Carlos Aragon" wrote in message <k5n37h\$ier\$1@newscl01ah.mathworks.com>...
> > I'm building a feedforwardnet like this:
> >
> > (..)
> > P=[V';ia';w'];
> > T=[tq'];
> > net=feedforwardnet([5 25],'trainbr');
> > (..)
> >
> > How could I train this neural net for more than one group '[V';ia';w']'? What is the MATLAB structure to perform this kind of training?
> >
> > Note that 'P' in this case is a 10006x3 matrix that I extract from a motor model.

>
> The issue here is that after training with set1, the weights will forget set1
> while they are learning set2. There are a variety of ways to mitigate forgetting.
>
> The best is to use a modification of NEWRB that allows the input of an initial
> hidden layer. Then
>
> 1. After training with set1, use those weights as initial weights for training with set2 + set1.
>
> 2. After training with set1, use those weights as initial weights for training with set2 and a "characteristic subset" of set1. The drawback is how to define that characteristic.
>
> The reason this works is that each hidden node basis function has a local region of influence and a 1-to-1 correspondence with a previous worst-classified training vector.
>
> Hope this helps.
>
> Greg
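In plain feedforwardnet terms (rather than the modified NEWRB Greg describes, which is not shown here), option 1 might be sketched as follows; as I understand it, train() continues from the net's current weights, so the set1 solution serves as the initial point for the combined run:

```matlab
% Sketch of option 1 with a plain feedforwardnet (not the modified
% NEWRB): after training on set1, train again on set1 + set2 starting
% from the weights already stored in the net. train() resumes from the
% net's current weights, so no explicit weight copying is needed.
% P1,T1 and P2,T2 are placeholder input/target matrices.
net = feedforwardnet([5 25], 'trainbr');
net = train(net, P1, T1);            % learn set1
net = train(net, [P1 P2], [T1 T2]);  % refine on set1 + set2 from set1 weights
```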
