
Topic: Bayesian Neural Networks and Uncertainty 'Error' Bars
Replies: 3   Last Post: Jul 26, 2012 12:58 PM


 John Foltz Posts: 1 Registered: 5/12/10
Bayesian Neural Networks and Uncertainty 'Error' Bars
Posted: May 12, 2010 7:46 PM

Hi, this is my first post on the newsgroup. I'm hoping someone can help me with some neural network modeling that I've been working on for a few weeks now. I've searched the newsgroup, other websites, and even the Neural Network Toolbox user guide for help, but haven't found my answers yet.

I'm trying to take an array of inputs that I've already normalized using the mapminmax function and relate them to a single output. The rows in my input array represent the various measurements that describe a condition. Each column is a different condition. Likewise, the output array is a single row, with each column representing the resultant for each condition. The individual input and output values are scalars.

My goal is to train a neural network using the Bayesian training algorithm. I would like 1 hidden layer consisting of 3 nodes. I want the model to compute the errors of both the training and validation data partitions. I'm using dividerand to segment my data into the training and validation sets.

From what I understand, Bayesian methods treat the network weights and biases as random variables with roughly Gaussian distributions, rather than as point values. I would like to use the spread (tails) of those distributions to estimate the error of the predictions. Is there a way I can find out what these are? I need to be able to feed the trained network 'virtual data', i.e. conditions not originally in the training set, and find out what the output predictions and uncertainties are.
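[A note for context: as far as I know, the Neural Network Toolbox's trainbr performs MacKay-style Bayesian regularization but does not expose the posterior weight variances needed for analytic error bars. A common practical substitute is a committee: train several networks and use the spread of their predictions as a rough uncertainty estimate. A minimal sketch, assuming ins.train/out.train exist as in the code below and virtualX is an already-scaled array of new conditions (both names are placeholders):]

```matlab
% Hypothetical committee sketch for rough error bars.
% Assumes ins.train, out.train (as in the code below) and a scaled
% input array virtualX of new conditions are in the workspace.
nNets = 10;                              % committee size (arbitrary choice)
preds = zeros(nNets, size(virtualX, 2));
for k = 1:nNets
    net = newff(ins.train, out.train, 3, {'tansig'}, 'trainbr');
    net.divideFcn = '';                  % use all supplied data for training
    net.trainParam.showWindow = 0;
    net.trainParam.epochs = 300;
    net = train(net, ins.train, out.train);  % different random init each time
    preds(k, :) = sim(net, virtualX);        % prediction in scaled space
end
committeeMean = mean(preds, 1);   % point prediction (scaled space)
committeeStd  = std(preds, 0, 1); % spread across nets = rough 'error bar'
```

[This is only an approximation to the true Bayesian predictive variance, but it needs nothing beyond the standard toolbox calls.]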

I have a Linux-based program that I'm upgrading from that can do what I'm asking about, but I don't know how it does it. It was written by David MacKay in the 1990s, so it's good, but seriously cumbersome.

If you can be of any help, please set me in the right direction. I'm trying to use this to complete a PhD in engineering. Below is the code I'm using right now. The seeds and biases are there to make each run reproducible and to search for the most accurate training. Thanks!!

-John

seed = [100,1000,10000,100000,1000000,10000000];
bias = [0.1,0.5,1,5,10,20,100,1000,10000,1000000,10000000];

% LOAD inputs as "INS" and output as "OUT" into the workspace

%% Scaling network & segmentation

[ins.all, ins.map] = mapminmax(INS);
out.logged = log(OUT); % the output data I'm modeling is exponential in nature
[out.all, out.map] = mapminmax(out.logged);

[ins.train,ins.validate,ins.testset,ins.trainInd,ins.valInd,ins.testInd] = dividerand(ins.all,0.8,0.2,0);
[out.train,out.validate,out.testset] = divideind(out.all,ins.trainInd,ins.valInd,ins.testInd);

for counter1 = 1:size(seed,2)
    for counter2 = 1:size(bias,2)
        rand('seed', seed(counter1)); % fix the RNG so each run is repeatable

        net = newff(ins.train, out.train, 3, {'tansig'}, 'trainbr');
        net.trainParam.showWindow = 0;
        net.divideFcn = '';
        net.performParam.ratio = bias(counter2); % Does this set the initial bias weights?
        % net.trainParam.show = 10; % Not sure what this does, but it's in examples
        net.trainParam.epochs = 300;
        [net, tr] = train(net, ins.train, out.train);

        name = strcat('nnSeed', num2str(counter1), 'Bias', num2str(counter2));
        nndatabase.(name) = net;
        nntraining.(name) = tr;
        Errors(counter1, counter2) = tr.perf(end);
    end
end
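[For the 'virtual data' part of my question, my understanding of how one would apply a trained net to new conditions is sketched below. It reuses the scaling structures ins.map and out.map from above; virtualINS is a placeholder name for a new measurements-by-conditions array:]

```matlab
% Hypothetical sketch: predict on 'virtual' conditions not in the training set.
% virtualINS (placeholder) has the same row layout as INS above.
x = mapminmax('apply', virtualINS, ins.map);     % reuse the training scaling
yScaled = sim(net, x);                           % network output, scaled space
yLogged = mapminmax('reverse', yScaled, out.map);% undo the output scaling
yPred = exp(yLogged);                            % undo the log transform
```

[This gives point predictions only; the uncertainties are what I'm still missing.]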

Date Subject Author
5/12/10 John Foltz
7/26/12 Piotr
7/26/12 Greg Heath
7/26/12 Piotr

© Drexel University 1994-2013. All Rights Reserved.
The Math Forum is a research and educational enterprise of the Drexel University School of Education.