The Math Forum


Math Forum » Discussions » Software » comp.soft-sys.matlab


Topic: Degree of freedom in Neural Networks
Replies: 3   Last Post: Dec 12, 2013 5:40 AM


Greg Heath

Posts: 6,387
Registered: 12/7/04
Re: Degree of freedom in Neural Networks
Posted: Dec 12, 2013 5:40 AM

"Florian " <> wrote in message <l8aoo1$n1o$>...
> > Hope this helps.
> >
> > Greg

> Yes!
> Thank You.
> As far as I know the validation stopping is applied to avoid overfitting.


Validation stopping is used to prevent overtraining an over-fit net.

>But what exactly is the reason for bad generalization/not being robust if there are too many weights compared to the number of equations?

When there are more unknowns than equations (Nw > Ntrneq), the excess Nw - Ntrneq = -Ndof weights can take arbitrary values. The remaining Nw - (-Ndof) = Ntrneq weights are then determined from the Ntrneq training equations. However, a set of Nw weights obtained this way will not, in general, be valid for non-training data.
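Using the notation in this thread, the bookkeeping for a single-hidden-layer I-H-O net can be sketched as follows (a Python sketch for illustration; the function name is mine, not from the toolbox):

```python
# Degrees-of-freedom bookkeeping for a single-hidden-layer I-H-O net,
# following the Nw / Ntrneq / Ndof notation used in this thread.

def ndof(I, H, O, Ntrn):
    """Return (Nw, Ntrneq, Ndof) for an I-H-O net trained on Ntrn cases."""
    Nw = (I + 1) * H + (H + 1) * O   # weights, including hidden and output biases
    Ntrneq = Ntrn * O                # one equation per output per training case
    return Nw, Ntrneq, Ntrneq - Nw   # Ndof = Ntrneq - Nw

# Example: 4 inputs, 10 hidden units, 1 output, 30 training cases
Nw, Ntrneq, Ndof = ndof(4, 10, 1, 30)
print(Nw, Ntrneq, Ndof)  # 61 weights, 30 equations, Ndof = -31 (over-fit)
```

A negative Ndof means the net is over-fit: 31 of the 61 weights are not pinned down by the training data at all.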

>Also over-fitting?

Over-fitting by itself does not cause poor generalization. The cause is overtraining an over-fit net until it fits the training data so closely that it fails to fit non-training data.
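The effect can be illustrated without a neural net (a hypothetical analogy, not Greg's example): a model with as many parameters as training points, forced to fit noisy data exactly, matches the training set perfectly but misses the underlying function between the training points.

```python
# Analogy for overtraining an over-fit model: a degree-5 Lagrange
# interpolant through 6 noisy samples (Ndof = 0) fits training data
# exactly but generalizes poorly to points it was not trained on.

def lagrange_fit(xs, ys):
    """Return the unique degree-(n-1) interpolant through the n points."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

true_f = lambda x: x * x                        # underlying target function
xs = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]             # 6 training inputs
noise = [0.05, -0.04, 0.03, -0.05, 0.04, -0.03] # fixed "measurement" noise
ys = [true_f(x) + e for x, e in zip(xs, noise)]

p = lagrange_fit(xs, ys)                        # fits all 6 points exactly
train_err = max(abs(p(x) - y) for x, y in zip(xs, ys))
test_err = max(abs(p(x) - true_f(x)) for x in [0.1, 0.3, 0.5, 0.7, 0.9])
print(train_err, test_err)  # ~0 on training data, much larger off it
```

Validation stopping avoids this by halting training while the error on held-out data is still decreasing, before the fit to the training noise becomes exact.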



© The Math Forum at NCTM 1994-2018. All Rights Reserved.