The Math Forum


Math Forum » Discussions » Software » comp.soft-sys.matlab


Topic: Problem with 1-step ahead prediction in neural network
Replies: 9   Last Post: Oct 23, 2013 6:23 AM

Greg Heath

Re: Problem with 1-step ahead prediction in neural network
Posted: Oct 21, 2013 9:02 PM

"phuong" wrote in message <l4391j$ikb$>...
> Sorry if I made you uncomfortable. I'm just trying to understand your answer and use it in my problem. I really do not understand and need your help.

Don't worry about it. I'm just frustrated because I do not understand what your problem is. Once I understand that, I shouldn't have any problem helping you solve it.

> First of all, I would like to present my understanding of your answer. If it is not correct, please explain it again for me.
> 1. There is only one weight set for each neural network; just change the order of the weights to make a different network.

I thought your initial post referred to THE solution.

I am saying that ANY single-hidden-layer network, whether good or bad, with H odd-parity (e.g., tanh) hidden nodes is one of at least 2^H*H! networks that have EXACTLY the same input/output relationship. Therefore, there is no such thing as THE solution.

Furthermore, if you encounter one bad design attempt due to validation stopping, a high local minimum, maximum mu or maximum epoch, then there exist at least 2^H*H!-1 other bad designs.

Therefore, my advice is to design enough nets until you are confident that you have enough good ones.
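The 2^H*H! count comes from two exact weight symmetries of an odd hidden activation: permuting the H hidden nodes (H! orderings) and flipping the sign of any hidden node's input weights while also negating its output weight (2^H sign patterns). A minimal numpy sketch (Python rather than MATLAB, purely illustrative; all names are mine) checks both symmetries numerically:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
I, H, O = 3, 4, 1                       # inputs, hidden nodes, outputs

# Random single-hidden-layer net with tanh (an odd function) hidden nodes
IW = rng.normal(size=(H, I)); b1 = rng.normal(size=(H, 1))
LW = rng.normal(size=(O, H)); b2 = rng.normal(size=(O, 1))

def net(x, IW, b1, LW, b2):
    return LW @ np.tanh(IW @ x + b1) + b2

x = rng.normal(size=(I, 5))             # five test inputs
y = net(x, IW, b1, LW, b2)

# Symmetry 1: permute the hidden nodes (H! choices)
p = rng.permutation(H)
y_perm = net(x, IW[p], b1[p], LW[:, p], b2)

# Symmetry 2: flip the sign of a hidden node's input weights and bias
# (2^H choices); tanh(-z) = -tanh(z), so negating the matching column
# of LW compensates exactly
s = np.where(rng.random(H) < 0.5, -1.0, 1.0)
y_flip = net(x, s[:, None] * IW, s[:, None] * b1, LW * s, b2)

assert np.allclose(y, y_perm) and np.allclose(y, y_flip)
print(2**H * math.factorial(H), "equivalent nets in total")  # 384 for H = 4
```

Every one of those 384 weight configurations computes the identical function, which is why "the" solution is not a meaningful notion here.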

> 2. You use Ntrails to find the acceptable H.

Ntrials (note the spelling) is the number of design attempts for every candidate value of H. I not only try to find a minimum acceptable H; I also try to find a reasonable number of acceptable I-H-O designs for estimating performance statistics (e.g., quantiles or the summary stats min/median/mean/std/max). What counts as a reasonable number is determined by the designer.
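The Ntrials-per-H search can be sketched as a double loop. This is an assumption-laden stand-in, not Greg's actual code: it uses scikit-learn's MLPRegressor in place of MATLAB's toolbox nets, a toy target of my own, and plain training R^2 from score() where the thread's R2trna is the degrees-of-freedom-adjusted version:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(200, 2))       # toy data (hypothetical)
y = np.sin(3 * X[:, 0]) * X[:, 1]

Ntrials = 5                                  # design attempts per candidate H
results = {}
for H in (2, 4, 8):                          # candidate numbers of hidden nodes
    R2trn = []
    for trial in range(Ntrials):             # different random initial weights
        mlp = MLPRegressor(hidden_layer_sizes=(H,), activation='tanh',
                           max_iter=2000, random_state=trial)
        mlp.fit(X, y)
        R2trn.append(mlp.score(X, y))        # training R^2 (R2trna analogue)
    results[H] = np.array(R2trn)
    print(f"H={H}: min={results[H].min():.3f} "
          f"median={np.median(results[H]):.3f} max={results[H].max():.3f}")
```

The per-H summary statistics are what let you judge both the smallest acceptable H and how many of the Ntrials attempts were good designs.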

> If my understand is right, I really wonder about the following:
> 1. with only weight set, I think we have only mse for network in train.

I do not understand the statement.

> Even it not change in predict on a fix range of time by 1-step (apply weight for new input and calculate mse again).

I don't understand.

> 2. Like above, you have Ntrails result of R2a. so what is the best and what is the intialize weight we should use.

I use R2trna for ranking nets only if I don't have enough data to have a reasonably sized validation set. Otherwise, I use validation set error to rank the nets.

> My goal is to find the network with a stable (minimized) mse and the correct trend (up and down, corresponding to the real data).

My training goal is to find nets with the smallest value of H for which R2trna >= 0.99. Although all such nets are acceptable, I rank them according to R2val. The corresponding R2tst values are all UNBIASED estimates of performance on unseen data. As Ntrials increases, the estimates and confidence levels tend to become more accurate (mean) and precise (stdv).

As far as choosing a "best" design for operational purposes, I would rank the nets
w.r.t. the performance on the nontraining (val+test) data even though the val set
is considered part of the design (trn+val) set.

Then I would combine the outputs of the best M nets. M is determined by the designer.
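Ranking by nontraining performance and averaging the best M outputs can be sketched in a few lines of numpy. The names preds and r2val are hypothetical: preds holds each candidate net's predictions on new data, and r2val holds the ranking scores (here, validation R^2):

```python
import numpy as np

# Hypothetical: predictions of 3 candidate nets on 3 new points,
# and each net's validation R^2 used for ranking
preds = np.array([[1.0, 2.1, 2.9],
                  [1.2, 1.9, 3.1],
                  [0.5, 3.5, 2.0]])
r2val = np.array([0.98, 0.97, 0.60])

M = 2                                   # designer-chosen ensemble size
best = np.argsort(r2val)[::-1][:M]      # indices of the M best-ranked nets
ensemble = preds[best].mean(axis=0)     # simple average of their outputs
print(ensemble)
```

A plain average is the simplest combination; weighting each net by its nontraining score is a common refinement, also at the designer's discretion.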

Hope this helps.



© The Math Forum at NCTM 1994-2018. All Rights Reserved.