I have read some papers about Bayesian backpropagation. They are:
1. David J. C. MacKay (1992), "A Practical Bayesian Framework for Backpropagation Networks", Neural Computation
2. David J. C. MacKay (1992), "Bayesian Interpolation", Neural Computation
3. Peter M. Williams (1994), "Bayesian Regularization and Pruning Using a Laplace Prior", Neural Computation
These are methods to prevent overfitting. They are similar to the commonly used weight-decay regularization, but there is no need to specify the regularization constant by hand. To my understanding, these methods can be briefly described as follows:
1. Start with a first guess of the regularization constant.
2. Train the network.
3. Re-estimate the regularization constant, since it can be interpreted through the prior probability distribution of the weights (for a Gaussian prior it is the inverse variance, i.e. 1/sd^2, of that distribution).
4. Repeat steps 2 and 3 until the estimated regularization constant converges.
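To make my understanding concrete, here is a sketch of the loop for a model that is linear in its parameters, where every step has a closed form (so convergence is easy to check). All variable names and the toy data are my own; alpha is the regularization constant (inverse prior variance of the weights) and beta is the inverse noise variance, following MacKay's notation:

```python
import numpy as np

# Sketch of the iterative evidence-framework update for a
# linear-in-parameters model.  alpha = regularization constant
# (inverse prior variance of the weights); beta = inverse noise
# variance.  Toy data and names are illustrative, not from the papers.

rng = np.random.default_rng(0)

# Toy data: y = 2x - 1 plus noise, polynomial basis functions.
x = rng.uniform(-1, 1, size=50)
y = 2.0 * x - 1.0 + rng.normal(scale=0.1, size=x.size)
Phi = np.vander(x, N=6, increasing=True)           # design matrix
N, W = Phi.shape

alpha, beta = 1.0, 1.0                             # step 1: first guesses
for _ in range(200):
    # Step 2: "train" -- the most-probable weights given alpha, beta.
    A = alpha * np.eye(W) + beta * Phi.T @ Phi     # Hessian of the cost
    m = beta * np.linalg.solve(A, Phi.T @ y)

    # Step 3: re-estimate alpha (and beta) from the trained weights.
    eigvals = np.linalg.eigvalsh(beta * Phi.T @ Phi)
    gamma = np.sum(eigvals / (eigvals + alpha))    # well-determined params
    alpha_new = gamma / (m @ m)
    beta_new = (N - gamma) / np.sum((y - Phi @ m) ** 2)

    # Step 4: stop when the estimated constants have converged.
    if abs(alpha_new - alpha) < 1e-8 and abs(beta_new - beta) < 1e-8:
        break
    alpha, beta = alpha_new, beta_new

print(f"alpha={alpha:.4g}  beta={beta:.4g}  gamma={gamma:.3g}")
```

On this toy problem the loop converges in a few dozen iterations, with gamma (the number of well-determined parameters) close to 2, matching the two-parameter generating function. For a nonlinear network the inner "training" step is of course a full backpropagation run rather than a linear solve.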
I have tried one of the 'fast and dirty' methods from MacKay's papers, but I cannot get the result to converge. I would like to ask if anyone with experience on this subject could give me some help. Is my understanding of the method correct? Can a converging result be obtained from the other, more accurate Bayesian methods? Thanks in advance.