The Math Forum



Math Forum » Discussions » sci.math.* » sci.stat.math

Topic: Help on Bayesian Backpropagation
Charles Lam

Posts: 1
Registered: 12/18/04
Help on Bayesian Backpropagation
Posted: Jul 22, 1996 5:27 PM

Hello everyone,

I have read some papers about Bayesian backpropagation. They are:
1. David J. C. MacKay (1992), "A Practical Bayesian Framework for Backpropagation Networks", Neural Computation
2. David J. C. MacKay (1992), "Bayesian Interpolation", Neural Computation
3. Peter M. Williams (1994), "Bayesian Regularization and Pruning Using a Laplace Prior", Neural Computation

These are methods to prevent overfitting, similar to the commonly used regularization method, but
without the need to specify the regularization constant by hand. To my understanding, these methods
can be briefly described as follows:
1. Start with a first guess of the regularization constant.
2. Train the network.
3. Re-estimate the regularization constant, since it can be related to the standard deviation of the
prior probability distribution of the weights.
4. Repeat 2 and 3 until the estimated regularization constant converges.
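The outer loop in steps 1-4 can be sketched with a toy example. A minimal sketch, with some stand-ins: a closed-form penalized least-squares solve replaces the backprop training of step 2 (the re-estimation loop is the point, not the training method), and the re-estimation in step 3 uses the simple update alpha = k / sum(w_i^2), i.e. fitting the precision of a zero-mean Gaussian prior to the current weights as if all k parameters were well determined. The data and model here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model y = X w_true + noise; the closed-form ridge solve
# below stands in for "train the network" (step 2).
n, k = 100, 5
X = rng.standard_normal((n, k))
w_true = np.array([2.0, -1.0, 0.5, 0.0, 3.0])
y = X @ w_true + 0.3 * rng.standard_normal(n)

alpha = 1.0  # step 1: first guess of the regularization constant
for it in range(100):
    # step 2: minimize ||X w - y||^2 + alpha * ||w||^2 in closed form
    w = np.linalg.solve(X.T @ X + alpha * np.eye(k), X.T @ y)
    # step 3: re-estimate alpha as the precision (1 / variance) of a
    # zero-mean Gaussian prior fitted to the trained weights
    new_alpha = k / (w @ w)
    # step 4: stop once the estimate has converged
    if abs(new_alpha - alpha) < 1e-8:
        alpha = new_alpha
        break
    alpha = new_alpha

print(f"converged after {it} outer iterations: alpha = {alpha:.4f}")
```

One caveat with this simple update: when the data term is weak, shrinking weights make alpha grow, which shrinks the weights further, and the iteration can run away instead of converging. MacKay's evidence framework replaces k by gamma, the number of well-determined parameters (computed from the Hessian eigenvalues), which is part of what the full method adds over the quick approximation.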

I have tried one of the 'fast and dirty' methods from MacKay's papers, but I cannot get the result
to converge. I would like to ask anyone with experience on this subject for some help. Is my
understanding of the method correct? Can a converging result be obtained with the other, more
accurate Bayesian methods? Thanks in advance.


Regards,

Charles Lam






© The Math Forum at NCTM 1994-2017. All Rights Reserved.