
Topic: Neural Network -- Incremental Training
Replies: 8   Last Post: Jun 5, 2014 11:41 PM

Greg Heath

Posts: 5,955
Registered: 12/7/04
Re: Neural Network -- Incremental Training
Posted: May 29, 2014 5:46 AM

"Marko Kolarek" <kolarek@gmail.com> wrote in message <lm4ms9$ejq$1@newscl01ah.mathworks.com>...
> "George Xu" <gxu5@jhu.edu> wrote in message <ef314ef.1@webx.raydaftYaTP>...
> > Thank you for your input Greg. I'll give this a shot.
> >
> > Thanks again!
> >
> > Greg Heath wrote:
> > >
> > > George Xu wrote:
> > >
> > >> Hello,
> > >>
> > >> I would like to use the Neural Network module to train a
> > >> network that can be used as a classifier. At a later time,
> > >> when additional data is available, train the network further
> > >> using the new data but without training everything else again.
> > >>
> > >> Is there a way to load an existing network, perform the
> > >> training, and add the weights from the new training to the
> > >> existing network?
> > >
> > > You have to
> > >
> > > 1. Prevent the network from "forgetting the old data" by saving
> > > a Calibration Set which will be combined with the new data
> > > when the net is retrained.
> > > 2. Decide on the relative weighting of the old and new data.
> > >
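A minimal MATLAB sketch of that calibration-set idea, with hypothetical
variable names (xCal/tCal are the stored calibration subset of the old
data, xNew/tNew the new data, OldNet the previously trained net); the
relative weighting of step 2 is approximated here by replicating the
calibration set:

k = 2;                                % relative weight of old data (a design choice)
xTrain = [repmat(xCal, 1, k), xNew];  % calibration + new inputs, samples as columns
tTrain = [repmat(tCal, 1, k), tNew];  % calibration + new targets
[NewNet, tr] = train(OldNet, xTrain, tTrain);   % retrain on the combined set
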
> > > The best way is to use an RBF (help newrb). However, you might
> > > run into the problem of having more estimated parameters than
> > > you have training equations (Neq = Ntrn*O = product of number
> > > of training cases and number of network outputs). You then have
> > > to write some original code to do one or more of the following
> > >
> > > 1. Remove some redundant RBFs
> > > 2. Merge some neighboring RBFs
> > > 3. Determine the output layer weights using pinv.
> > >
> > > I think you can do 3 by modifying newrb.
> > >
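For 3, a sketch of re-solving the output layer by linear least squares,
assuming the internal layout newrb nets use (dist input-weight function,
netprod net input, radbas transfer function); x is an I-by-N input
matrix, t an O-by-N target matrix, and net the newrb network:

N  = size(x, 2);                            % number of training cases
d  = dist(net.IW{1,1}, x);                  % distances from inputs to the RBF centers
a1 = radbas(d .* (net.b{1} * ones(1, N)));  % hidden-layer outputs, Nhid-by-N
Wb = t * pinv([a1; ones(1, N)]);            % output weights and bias in one solve
net.LW{2,1} = Wb(:, 1:end-1);               % new output-layer weights
net.b{2}    = Wb(:, end);                   % new output-layer bias

pinv returns the minimum-norm least-squares solution, which is what
makes it useful when there are more output-layer parameters than
training equations (Neq = Ntrn*O).
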

> > >> My current implementation, which does not seem to work, is as
> > >> follows:
> > >>
> > >> load OldNet
> > >> [NewNet, tr] = train(OldNet, features, targets)

> > >
> > > Aha! The classical "plasticity/stability dilemma", which should
> > > have been named the "plasticity/stability tradeoff". If you stick
> > > with an MLP (help newff), I can't think of a good fix.
> > >
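To see the dilemma concretely, here is a toy sketch (hypothetical data,
using the newer feedforwardnet in place of newff) of what happens when
an MLP is simply retrained on new data alone:

xOld = linspace(-2, 0, 100);  tOld = sin(xOld);   % old data
xNew = linspace( 0, 2, 100);  tNew = sin(xNew);   % new data

net = feedforwardnet(10);                  % MLP with 10 hidden neurons
net = train(net, xOld, tOld);
mseOldBefore = mse(net, tOld, net(xOld))   % fit to old data after first training

net = train(net, xNew, tNew);              % retrain on the new data only
mseOldAfter  = mse(net, tOld, net(xOld))   % fit to old data typically degrades badly
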

> > >> Any help would be greatly appreciated.
> > >>
> > >> Thanks!

> > >
> > > The problem is solvable. However, it will take a good understanding
> > > of the newrb source code before you can begin to deal with
> > > mitigating the overfitting problem of having more parameters to
> > > estimate than training equations when the net is retrained
> > > with new data.
> > >
> > > Good Luck.
> > >
> > > Greg
> > >
> > >

>
> I am sorry to be reviving this old thread, but could you please tell me whether you have come up with a solution?


I have described the solution above and coded tens of versions in Fortran
from 1983 to 1998. However, they were government sponsored and are not available.

A post-retirement MATLAB version in 2004 was lost in a computer crash. I have no desire to try to reconstruct it.


