"Ton Schomaker" <email@example.com> wrote in message <firstname.lastname@example.org>...
> "Greg Heath" <email@example.com> wrote in message <firstname.lastname@example.org>...
> > "Ton Schomaker" <email@example.com> wrote in message <firstname.lastname@example.org>...
> > > I have trained a very good ff neural network with the restriction that any increase in each input parameter A, B, C... must give a higher output value (R2 = 0.94). However, when I use the best trained NN with better (i.e. increased) input data, the output is sometimes lower than the original target value.
> > > Does someone have a solution for this problem?
> >
> > Good solutions have a mean error of 0. Therefore, ~50% of the answers should be above the target value.
> >
> > Hope this helps.
> >
> > Greg
>
> You are right, Greg, but that is not my problem. Let me try to explain. I am using a NN to improve the water quality of natural swimming waters. So I use data from many of these waters, including, among other things, the number of sewer outlets. With fewer outlets as input for the trained NN, I sometimes get worse water quality while expecting a better one.
> How can I train a new network that avoids this error?
>
> Kind regards, Ton
I'm sorry, I don't fully understand your problem. However, the basic assumption is that the training data adequately characterize the probability distribution of the operational data. If that is not true, then you have to add simulated data that contain the correct characteristics.
Well-trained NNs can be excellent interpolators BUT terrible extrapolators.
I would expect that if you used the smallest number of hidden nodes possible, you might get better extrapolation. However, don't bet the kids' tuition on it.
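One way to guarantee the behavior the original poster wants, rather than hoping the training data induce it, is to build the monotonicity into the network itself: with an increasing activation function and all weights constrained to be positive, the output is non-decreasing in every input by construction, no matter how the training goes. Below is a minimal NumPy sketch of this idea (not from the thread; the toy data, layer sizes, and learning rate are all assumptions for illustration). The positivity constraint is enforced by storing raw parameters V and using W = exp(V) as the effective weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: target increases with both inputs (a hypothetical stand-in
# for the water-quality problem; shapes and coefficients are assumptions).
X = rng.uniform(0, 1, size=(200, 2))
y = 0.7 * X[:, 0] + 0.3 * X[:, 1] + 0.05 * rng.normal(size=200)

# One hidden layer. Raw parameters V are unconstrained; the effective
# weights exp(V) are strictly positive, which (with the increasing
# tanh activation) makes the output non-decreasing in each input.
H = 4
V1 = rng.normal(scale=0.5, size=(2, H)); b1 = np.zeros(H)
V2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)

def forward(X, V1, b1, V2, b2):
    W1, W2 = np.exp(V1), np.exp(V2)   # positive weights by construction
    h = np.tanh(X @ W1 + b1)          # increasing activation
    return (h @ W2 + b2).ravel()

lr = 0.05
for _ in range(2000):
    W1, W2 = np.exp(V1), np.exp(V2)
    z = X @ W1 + b1
    h = np.tanh(z)
    out = (h @ W2 + b2).ravel()
    err = out - y                     # gradient of 0.5*MSE w.r.t. out
    n = len(y)
    gW2 = h.T @ err[:, None] / n
    gb2 = err.mean(keepdims=True)
    dz = (err[:, None] @ W2.T) * (1 - h ** 2)
    gW1 = X.T @ dz / n
    gb1 = dz.mean(axis=0)
    # Chain rule through W = exp(V): dL/dV = (dL/dW) * W
    V1 -= lr * gW1 * W1; b1 -= lr * gb1
    V2 -= lr * gW2 * W2; b2 -= lr * gb2

# Monotonicity check: bumping any input up never lowers the output.
x0 = np.array([[0.3, 0.5]])
for i in range(2):
    x1 = x0.copy(); x1[0, i] += 0.1
    assert forward(x1, V1, b1, V2, b2) >= forward(x0, V1, b1, V2, b2)
```

The price of the constraint is reduced flexibility, so the fit (R2) may drop somewhat, but the network can no longer report worse water quality when an input is improved, even when extrapolating outside the training data.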