Newsgroup: comp.soft-sys.matlab

Topic: Problem with 1-step ahead prediction in neural network
Replies: 9   Last Post: Oct 23, 2013 6:23 AM

Greg Heath

Posts: 5,978
Registered: 12/7/04
Re: Problem with 1-step ahead prediction in neural network
Posted: Oct 19, 2013 12:53 AM

"phuong" wrote in message <l3s2ha$4qv$1@newscl01ah.mathworks.com>...
> Hi everybody,
> I am having trouble with 1-step-ahead prediction in a neural network.
> Every time I train the network with the same fixed parameters, I get different weights (IW, LW, b).
> I know the reason is the random initial weights. But how can we trust the 1-step-ahead prediction if it changes with every training run? Maybe the network has not converged, because if it had converged there would be only one solution (or an approximate solution). So has the network converged?
> Because of all this, the test results for 100 new points predicted by the network vary from run to run, and sometimes the differences are very large.
> Please help me fix these problems.
> Thank you very much.
> Phuong


The only problem is your assumption that there is only one solution. For any I-H-O network configuration with tansig hidden nodes, there are (2^H)*H! - 1 other nets that are exactly equivalent. For the default value H = 10, that makes (2^10)*factorial(10) = 3,715,891,200 equivalent nets in all:
1. There are H! equivalent nets that differ only in the ordering of the hidden nodes.
2. Since tansig is an odd function, the signs of all weights connected to any one hidden node can be flipped without changing the output. That is a factor of 2 per hidden node, i.e., 2^H for each of the H! orderings.
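The count is easy to verify in MATLAB (H = 10 is the toolbox default hidden layer size):

H = 10;                            % default hidden layer size
numEquiv = (2^H)*factorial(H)      % 2 sign choices per node times H! orderings
% numEquiv = 3715891200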

To make things worse, there can be local minima that are not global minima, and the corresponding solutions range from excellent to very poor. Finally, there are other reasons (e.g., reaching the maximum mu in trainlm) why a minimization search can fail.

That is why I now use Ntrials = max(10,30/Ntst) random weight initializations for each candidate value of H.
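A minimal sketch of that multi-trial loop is below. It is only an illustration, not my exact code: it assumes fitnet with the nominal 70/15/15 data division, uses the toolbox example data simplefit_dataset as a stand-in for your own x and t, picks illustrative candidate values of H, and ranks nets by the training-record validation error tr.best_vperf (tr.best_tperf then gives an unbiased test estimate for the chosen net):

[x, t]  = simplefit_dataset;            % toolbox example data (substitute your own)
Ntst    = round(0.15*size(x,2));        % nominal test-set size (default 15% split)
Ntrials = max(10, round(30/Ntst));      % random initializations per candidate H
bestPerf = Inf;
for H = [5 10 15]                       % candidate hidden layer sizes (illustrative)
    for k = 1:Ntrials
        net = fitnet(H);
        net = configure(net, x, t);     % draws new random initial IW, LW, b
        [net, tr] = train(net, x, t);
        if tr.best_vperf < bestPerf     % rank nets by validation-set error
            bestPerf = tr.best_vperf;
            bestNet  = net;
            bestH    = H;
        end
    end
end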

Hope this helps.

Greg


