
Re: Training multiple data for a single feedforwardnet
Posted: Oct 29, 2012 2:29 PM


Greg, thanks in advance. You're helping a lot!
You said:
(...)
> The best is to use a modification of NEWRB that allows the input of an initial
> hidden layer. Then
>
> 1. After training with set1, use those weights as initial weights for training
> with set2 + set1.
>
> 2. After training with set1, use those weights as initial weights for training
> with set2 and a "characteristic subset" of set1. The drawback is how to define
> that characteristic subset.
>
> The reason this works is that each hidden node basis function has a local
> region of influence and a 1-to-1 correspondence with a previously
> worst-classified training vector.
(...)
I'm having trouble performing this in MATLAB. Is there an automated way to record the weights trained on set1 and then use them as the starting point for training on set2? How could I do it? Actually, I want my feedforwardnet to recognize 14 different sets of motor loads.
Thanks!!
"Greg Heath" <heath@alumni.brown.edu> wrote in message <k5v9a4$pj6$1@newscl01ah.mathworks.com>...
> "Carlos Aragon" wrote in message <k5n37h$ier$1@newscl01ah.mathworks.com>...
> > I'm building a feedforwardnet like this:
> >
> > (..)
> > P = [V'; ia'; w'];
> > T = [tq'];
> > net = feedforwardnet([5 25], 'trainbr');
> > (..)
> >
> > How could I train this neural net for more than one group '[V'; ia'; w']'?
> > What is the MATLAB structure to perform this kind of training?
> >
> > Note that 'P' in this case is a 10006x3 matrix that I extract from a motor
> > model.
>
> The issue here is that after training with set1, the weights will forget set1
> while they are learning set2. There are a variety of ways to mitigate
> forgetting.
>
> The best is to use a modification of NEWRB that allows the input of an initial
> hidden layer. Then
>
> 1. After training with set1, use those weights as initial weights for training
> with set2 + set1.
>
> 2. After training with set1, use those weights as initial weights for training
> with set2 and a "characteristic subset" of set1. The drawback is how to define
> that characteristic subset.
>
> The reason this works is that each hidden node basis function has a local
> region of influence and a 1-to-1 correspondence with a previously
> worst-classified training vector.
>
> Hope this helps.
>
> Greg
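[Editor's note: a minimal MATLAB sketch of option 1 above (warm-starting on the accumulated data). The cell arrays Pset and Tset holding the 14 load cases are an assumption for illustration, not from the original thread; in the Neural Network Toolbox, train() continues from the network's current weights, so each pass after the first is a warm start.]

```matlab
% Incremental (warm-start) training over 14 motor-load cases.
% Assumed data layout: Pset{k} is 3-by-Nk (rows V, ia, w),
% Tset{k} is 1-by-Nk (torque) -- hypothetical variable names.
net = feedforwardnet([5 25], 'trainbr');

Pacc = [];                         % accumulated inputs
Tacc = [];                         % accumulated targets
for k = 1:14
    % Option 1: train on set k plus all earlier sets,
    % so earlier loads are not forgotten.
    Pacc = [Pacc, Pset{k}];
    Tacc = [Tacc, Tset{k}];
    if k == 1
        net = configure(net, Pacc, Tacc);  % size and initialize weights once
    end
    % train() starts from the network's current weights,
    % so each iteration reuses what was learned so far.
    net = train(net, Pacc, Tacc);
end
```

Option 2 would be the same loop, but replacing the full accumulation with a "characteristic subset" of the earlier sets to keep the training matrices small.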

