"Florian " <email@example.com> wrote in message <firstname.lastname@example.org>... > > > Hope this helps. > > > > Greg > > Yes! > Thank You. > > As far as I know the validation stopping is applied to avoid overfitting.
Validation stopping is used to prevent overtraining an over-fit net.
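To make that concrete, here is a minimal sketch of validation stopping, under assumed details (a toy over-fit model: a degree-9 polynomial fit to noisy linear data by gradient descent, with the held-out validation error monitored every epoch and a hypothetical `patience` counter deciding when to stop):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy over-fit setup (assumption for illustration): fit y = 2x + noise
# with a degree-9 polynomial, i.e. 10 weights for only 20 training points.
x = rng.uniform(-1, 1, 40)
y = 2 * x + 0.3 * rng.normal(size=x.size)

X = np.vander(x, 10)             # degree-9 design matrix: 10 unknown weights
Xtrn, ytrn = X[:20], y[:20]      # training set
Xval, yval = X[20:], y[20:]      # validation set

w = np.zeros(X.shape[1])
best_w, best_val = w.copy(), np.inf
patience, bad = 50, 0            # stop after 50 epochs without improvement

for epoch in range(20000):
    # one gradient-descent step on the training MSE
    grad = Xtrn.T @ (Xtrn @ w - ytrn) / len(ytrn)
    w -= 0.1 * grad
    # monitor the validation (non-training) error
    val = np.mean((Xval @ w - yval) ** 2)
    if val < best_val - 1e-9:
        best_val, best_w, bad = val, w.copy(), 0
    else:
        bad += 1
        if bad >= patience:      # validation error stopped improving
            break

w = best_w  # keep the weights from the best validation epoch
```

The point of the sketch: training error keeps falling, but the weights that are returned are the ones from the epoch where the *validation* error was lowest, which is what keeps an over-fit net from being overtrained.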
> But what exactly is the reason for bad generalization/not being robust if there are too many weights compared to the number of equations?
When there are more unknowns than equations (Nw > Ntrneq, so Ndof = Ntrneq - Nw < 0), Nw-Ntrneq = -Ndof of the unknowns can take arbitrary values. The remaining Nw-(-Ndof) = Ntrneq unknowns are determined from the Ntrneq training equations. However, that set of Nw unknowns will not, in general, be valid for non-training data.
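The bookkeeping above can be sketched in a few lines, assuming the usual counts for a single-hidden-layer I-H-O MLP with one bias per hidden and output node (Nw = (I+1)*H + (H+1)*O unknown weights, Ntrneq = Ntrn*O training equations; the numbers in the example are hypothetical):

```python
# Weight/equation bookkeeping for a single-hidden-layer I-H-O MLP
# trained on Ntrn examples (assumed conventions: one bias per hidden
# and per output node).
def dof(I, H, O, Ntrn):
    Nw = (I + 1) * H + (H + 1) * O   # unknown weights
    Ntrneq = Ntrn * O                # training equations
    Ndof = Ntrneq - Nw               # estimation degrees of freedom
    return Nw, Ntrneq, Ndof

# Example: 1 input, 20 hidden nodes, 1 output, 30 training points
Nw, Ntrneq, Ndof = dof(1, 20, 1, 30)
# Nw = 61 unknowns vs Ntrneq = 30 equations, so Ndof = -31:
# -Ndof = 31 of the weights are underdetermined and the net is over-fit.
```

When Ndof < 0 the training equations cannot pin down all of the weights, which is exactly the situation where validation stopping earns its keep.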
Over-fitting by itself does not cause poor generalization. The cause is overtraining an over-fit net, so that it fits the training data so closely that it fails to fit non-training data.