
Topic: help me. NaR not working
Replies: 5   Last Post: Oct 10, 2013 1:14 PM

Greg Heath

Posts: 5,925
Registered: 12/7/04
Re: help me. NaR not working
Posted: Oct 8, 2013 10:35 AM

"phuong" wrote in message <l30p0q$bh5$1@newscl01ah.mathworks.com>...
> Thanks for your help, but please explain in more detail. I really don't understand much of your comment.
> 1. I just set the ratios for training, validation, and test.


No. You did more than that: you defined inputseries and inputseriesval.
DON'T. Just define indices ('divideind') or ratios ('divideblock') and the program will take care of the rest.
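For example, something along these lines (a sketch only; the 10-neuron narnet and the 0.70/0.15/0.15 ratios are just illustrative, and T is assumed to be your 1-by-N cell array of targets):

% Let the toolbox split the series; do NOT split it yourself.
net = narnet(1:2, 10);              % delays and hidden size shown for illustration
net.divideFcn = 'divideblock';      % contiguous blocks chosen from the ratios
net.divideParam.trainRatio = 0.70;
net.divideParam.valRatio   = 0.15;
net.divideParam.testRatio  = 0.15;
% (With 'divideind' you would set divideParam.trainInd/valInd/testInd instead.)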

> Here, I don't want to use a test set.

MSEtst is the only UNBIASED performance estimate. MSEtrn and MSEval are BIASED.

>Because I will apply the network to specify data.

Specific? Then just put that data in the test set. It will not be used for design.

> 2. I just want 1-step prediction.

That means FD = 1:
FD = 1;  H = 10;
net = narnet(FD,H);
...
netc = closeloop(net);
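Spelled out a bit more, a hedged sketch (assuming T is your 1-by-N cell array of targets) would be:

FD = 1;  H = 10;
net = narnet(FD, H);
[Xs, Xi, Ai, Ts] = preparets(net, {}, {}, T);   % shifts the series and builds the delay states
[net, tr]        = train(net, Xs, Ts, Xi, Ai);  % open-loop training = 1-step-ahead prediction
Y    = net(Xs, Xi, Ai);                         % 1-step-ahead outputs
netc = closeloop(net);                          % only needed for multistep prediction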

If that is not what you mean, explain in more detail.

Also, why do you think there is a difference between lag and delay?

> It means that, with the trained weights, I want to recalculate with a new pattern. The system will give me the output for the new pattern using the trained weights. I think it is different from multi-step-ahead prediction.

The net must have a buffer of feedback delays to operate properly. That is what preparets is for.
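For example, a sketch of applying the trained open-loop net to a new series (Tnew is an assumed name for a cell array in the same form as T):

[Xsn, Xin, Ain, Tsn] = preparets(net, {}, {}, Tnew); % fills the FD-length feedback delay buffer
Yn   = net(Xsn, Xin, Ain);                           % 1-step-ahead outputs for the new pattern
MSEn = perform(net, Tsn, Yn);                        % performance on the new data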

> And here, with the linear data set, I think it will be right.

I don't know what that means.

> 3. I agree with you, it is affected by the random weight initialization. But if the system is stable, I think it just converges to one result.

NO. NO. NO.

If the system is simple (H = []), MOST of the results will be the same. Otherwise NOT! Think of a mountain range: if you start out on one side of a mountain, training causes you to try to go down and stops when you start to go up. However, that local min may be higher than other local mins in the range.

> And following your comment, I inserted the command 'rng(0)' before training, and if the system does not fit, it always does not fit, and vice versa.

No. If you design multiple nets in a loop, the weights will all be different. If you choose the one with the lowest MSEval, you can redesign it later, because rng(0) makes the sequence of random initializations reproducible.

I have posted hundreds of multiple loop designs. Search on

neural greg Ntrials
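A stripped-down sketch of that kind of loop (not one of the posted designs; Ntrials = 10 and the bookkeeping variables are illustrative):

rng(0)                               % so the winning design can be reproduced later
Ntrials  = 10;
FD = 1;  H = 10;
bestperf = Inf;
for i = 1:Ntrials
    net = narnet(FD, H);             % each trial gets new random initial weights
    [Xs, Xi, Ai, Ts] = preparets(net, {}, {}, T);
    [net, tr] = train(net, Xs, Ts, Xi, Ai);
    if tr.best_vperf < bestperf      % keep the net with the lowest MSEval
        bestperf = tr.best_vperf;
        bestnet  = net;
    end
end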

> Please help me again and show me more detail.

> I really need more help.

Again, I have posted hundreds of examples in the NEWSGROUP and ANSWERS.

Use your search engine.

Greg



