
Topic: weight in neural network
Replies: 1   Last Post: May 2, 2013 4:06 AM

Greg Heath

Posts: 5,953
Registered: 12/7/04
Re: weight in neural network
Posted: May 2, 2013 4:06 AM

"srishti" wrote in message <klrd8a$5m7$1@newscl01ah.mathworks.com>...
> Hello Sir,
> Sir, I have tried the following code. The problem is that if I generate W1 and W2,
> the weights net.IW{1,1} and net.LW{2,1} are different, and if I don't use W1 and W2,
> net.IW{1,1} and net.LW{2,1} are also different. Sir, how are the weights related
> to W1 and W2?
>
> s = RandStream('mcg16807','Seed', 0);
> RandStream.setDefaultStream(s)
> x = sinimfin; % input
> t = t;        % target
> S1 = 1;       % number of hidden layers
> S2 = 2;       % number of output layers (= number of classes)


1. You are confusing the terms "layer" and "node".
If the input and output target matrices have dimensions

[ I N ] = size(input)
[ O N ] = size(target) % (O classes)

the typical NN has a single input layer with I nodes, a single
hidden layer with H nodes, and a single output layer with O nodes,
yielding an I-H-O node topology. In addition, there is a single input
bias node and a single hidden-layer bias node. The bias nodes
provide constant inputs that allow signals to be shifted vertically
without changing shape.

The input weight matrix, IW, the input bias weight vector, b1, the layer
weight matrix, LW, and the output bias weight vector, b2, have the sizes

[ H 1 ] = size(b1)
[ H I ] = size(IW)
[ O 1 ] = size(b2)
[ O H ] = size(LW)

The corresponding hidden and output layer signals are given by

hidden = tanh(IW*input + b1);
output = LW*hidden + b2;
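
To make the correspondence concrete, here is a minimal sketch with toy data of
my own (it assumes the default tansig hidden and purelin output transfer
functions, and removes the toolbox's default mapminmax input/output mappings so
that the two raw equations above apply directly):

x = -1:0.05:1;                   % toy input,  I = 1
t = sin(2*pi*x);                 % toy target, O = 1
net = feedforwardnet(10);        % H = 10 hidden nodes
net.inputs{1}.processFcns  = {}; % remove the default input normalization
net.outputs{2}.processFcns = {}; % remove the default output normalization
net = train(net, x, t);
N = size(x, 2);
hidden = tanh(net.IW{1,1}*x + repmat(net.b{1}, 1, N)); % [H N]
output = net.LW{2,1}*hidden + repmat(net.b{2}, 1, N);  % [O N]
max(abs(output - net(x)))        % ~0, up to roundoff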

> [R,Q] = size(x);
> W1 = rand(S1,R);
> W2 = rand(S2,S1);


2. You have created nonnegative random weight values in (0,1). Using the
function randn instead would create bipolar (positive and negative) weight values.
3. You have ignored the bias weights.
4. You have not assigned the weights to a net.
5. You do not have to initialize weights:
a. The older creation functions, e.g., newfit, newpr and newff, automatically
initialize weights designed to cover the function space created by the input
and target.
b. The current creation functions, e.g., fitnet, patternnet and feedforwardnet,
do not. However, the current version of the function train will do it
automatically if you have not already done it with the function configure.
A short sketch of points 2-4 follows.
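
A hedged sketch of points 2-4 (H, I and O here are example sizes of my own, not
values from your post; the assignment to a net is shown further below):

H = 4; I = 2; O = 2;  % example sizes only
W1 = randn(H, I);     % bipolar values, unlike rand(H,I) which lies in (0,1)
b1 = randn(H, 1);     % hidden bias weights -- missing from your code
W2 = randn(O, H);
b2 = randn(O, 1);     % output bias weights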

> net = patternnet(4);
> net = train(net,x,t);
> % view(net)
> y = net(x);
> plotconfusion(t,y);
> perf = mse(y-t);


If you use the expanded output form of train,

[ net tr y e ] = train(net,x,t);

you not only get the output y automatically, you also get the error e = t-y and
a training record, tr, with almost all of the other information you could wish
for about the training and about the performance of the training, validation
and test subsets.

Take the time to investigate what tr has to offer:

tr = tr % an assignment with no semicolon displays the whole structure
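
For example, some of the fields I look at first (field names as in the current
training record; display tr itself for the full list):

tr.stop        % the reason training stopped
tr.best_epoch  % epoch with the best validation performance
tr.perf(end)   % final training-subset performance
tr.trainInd    % column indices of x and t used for training
tr.valInd      % ... used for validation
tr.testInd     % ... used for testing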

If you really want to assign your own weights, try configure:

help configure
doc configure
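
For example, a minimal sketch (the randn values are my own illustration, not a
recommendation; configure sizes the weight arrays to match x and t, after which
your own values can be assigned before calling train):

net = patternnet(4);                 % H = 4 hidden nodes
net = configure(net, x, t);          % size the weight arrays from the data
net.IW{1,1} = randn(4, size(x,1));   % input weights, [H I]
net.b{1}    = randn(4, 1);           % hidden bias weights, [H 1]
net.LW{2,1} = randn(size(t,1), 4);   % layer weights, [O H]
net.b{2}    = randn(size(t,1), 1);   % output bias weights, [O 1]
net = train(net, x, t);              % training starts from these weights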

Hope this helps.

Greg


