
Topic: Degree of freedom in Neural Networks
Replies: 3   Last Post: Dec 12, 2013 5:40 AM

Greg Heath

Re: Degree of freedom in Neural Networks
Posted: Dec 12, 2013 5:40 AM

"Florian " <bonsaiflo@hotmail.de> wrote in message <l8aoo1$n1o$1@newscl01ah.mathworks.com>...
>
> > Hope this helps.
> >
> > Greg

>
> Yes!
> Thank You.
>
> As far as I know, validation stopping is applied to avoid overfitting.


NO!!!

Validation stopping is used to prevent overtraining an over-fit net.
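
In MATLAB terms, here is a minimal sketch of validation stopping, assuming the Neural Network Toolbox's fitnet (the names x, t, and H are placeholders, not from this thread):

net = fitnet(H);                     % H hidden nodes
net.divideFcn = 'dividerand';        % random train/val/test split
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;
net.trainParam.max_fail    = 6;      % stop after 6 consecutive validation failures (the default)
[net, tr] = train(net, x, t);
% tr.best_epoch marks the minimum of the validation error; the returned
% net carries the weights from that epoch, not from the final epoch.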

> But what exactly is the reason for bad generalization / lack of robustness if there are too many weights compared to the number of equations?

When there are more unknowns than equations, Nw - Ntrneq = -Ndof of the unknowns can be assigned arbitrary values. The remaining Nw - (-Ndof) = Ntrneq unknowns are then determined from the Ntrneq training equations. However, a set of Nw weights obtained that way is tailored to the training data and, in general, will not be valid for non-training data.
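
The bookkeeping above, as a sketch for an I-H-O fitnet (xtrn, ttrn, and H are placeholder names):

[I, Ntrn] = size(xtrn);          % I inputs, Ntrn training cases
[O, ~]    = size(ttrn);          % O outputs
Ntrneq = Ntrn*O;                 % number of training equations
Nw     = (I+1)*H + (H+1)*O;      % number of unknown weights and biases
Ndof   = Ntrneq - Nw;            % Ndof < 0 ==> over-fit: more unknowns than equations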

>Also over-fitting?

Over-fitting by itself does not cause poor generalization. The cause is overtraining an over-fit net so that it fits the training data so closely that it does not fit non-training data.
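
For example, a sketch of that failure mode (again with placeholder data, not from the original post): an over-fit 1-H-1 net trained to convergence with no validation set will typically thread the training points exactly and behave wildly between them.

x = -1:0.1:1;                    % Ntrn = 21 training cases
t = sin(2*pi*x) + 0.1*randn(size(x));
H = 30;                          % Nw = 2*30 + 31 = 91 >> Ntrneq = 21
net = fitnet(H);
net.divideFcn = 'dividetrain';   % all data used for training: no validation stopping
net = train(net, x, t);
xnew = -1:0.01:1;                % non-training points
ynew = net(xnew);                % typically a poor fit between the training points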


