


Re: neural nets trainbr & regularization penalty
Posted: Mar 1, 2013 4:56 AM


"C" wrote in message <kgli9o$6rj$1@newscl01ah.mathworks.com>... > The trainbr function apparently allows for training of a network with regularization of the network weights. However, the doc does not say anything about how to set the penalty that gets applied to the magnitude of the weights. How can this be set? > > Thanks in advance.
I don't know if there is a magic formula. Many years ago I made multiple designs in a loop over the size of that parameter and chose the design that yielded the lowest test set error. I didn't use a validation set, although I have seen it done.
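The loop-over-the-penalty approach described above can be sketched generically. Since trainbr itself is MATLAB-specific, this Python sketch uses plain ridge regression as a stand-in for a regularized network; the lambda grid and synthetic data are illustrative, not a recipe from the toolbox:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data (stand-in for a real design/test split).
X_train = rng.standard_normal((80, 5))
X_test = rng.standard_normal((40, 5))
true_w = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y_train = X_train @ true_w + 0.1 * rng.standard_normal(80)
y_test = X_test @ true_w + 0.1 * rng.standard_normal(40)

def fit_ridge(X, y, lam):
    """Weights minimizing ||Xw - y||^2 + lam*||w||^2 (closed form)."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Loop over the size of the penalty; keep the design with the
# lowest test-set error, as described in the post.
best_lam, best_err = None, np.inf
for lam in [1e-4, 1e-3, 1e-2, 1e-1, 1.0, 10.0]:
    w = fit_ridge(X_train, y_train, lam)
    err = np.mean((X_test @ w - y_test) ** 2)  # test-set MSE
    if err < best_err:
        best_lam, best_err = lam, err

print(best_lam, best_err)
```

Selecting on the test set, as noted, biases the reported error optimistically; a separate validation set for the selection step avoids that.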
Remember that the size of the weights depends on the scale of the input and output data. I always standardized both inputs and outputs to zero mean and unit variance (regression). I don't remember trying it on a classification problem.
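The zero-mean, unit-variance standardization mentioned above (what MATLAB's mapstd does) is simple to sketch; note that the test data must be transformed with the training statistics, not its own. This Python version with made-up data is illustrative:

```python
import numpy as np

def standardize(train, test):
    """Z-score each column using TRAIN statistics only
    (zero mean, unit variance on the training set)."""
    mu = train.mean(axis=0)
    sigma = train.std(axis=0)
    sigma[sigma == 0] = 1.0  # guard against constant columns
    return (train - mu) / sigma, (test - mu) / sigma

rng = np.random.default_rng(1)
raw_train = 50.0 + 10.0 * rng.standard_normal((100, 3))
raw_test = 50.0 + 10.0 * rng.standard_normal((30, 3))

z_train, z_test = standardize(raw_train, raw_test)
print(z_train.mean(axis=0))  # each column ~ 0
print(z_train.std(axis=0))   # each column ~ 1
```

For outputs standardized the same way, remember to invert the transform (multiply by sigma, add mu) when reporting predictions in the original units.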
Maybe a search using greg and/or heath with trainbr will reveal something I forgot.
Hope this helps.
Greg



