Date: Jan 19, 2013 9:34 PM
Author: Greg Heath
Subject: Re: neural network

"Jamaa Ambarak" <jamaa73@yahoo.com> wrote in message <kd49q6$c8u$1@newscl01ah.mathworks.com>...
---SNIP
> > Dear Greg,
> > Regarding the training data: say there are 15 drivers, and every driver has his own number of lane deviations, e.g. d1 has 50, d2 has 300, d3 has 400, ..., d15 has 200 departures from the lane. The drift also varies, sometimes one second or two seconds, and after that the driver takes action. The most important thing for me is the first sample of the lane deviation; this happens at lateral position = +/-0.81 m, and then I cut out the rest. So I have two kinds of windows, "1" and "0": the "1" window starts one second = 50 samples before the lane departure happens, and the "0" window starts after the driver goes back into the lane and has the same size as the "1" window. The "1" windows are [150x50]; the inputs are 50 samples of lateral position, 50 samples of speed, and 50 samples of steering angle, and the target is 1, meaning out of the lane. The "0" windows are the same but have target = 0. So the total of "1" and "0"
> > windows for driver d1 is 100 windows, 50/50 in and out, and likewise for the other drivers. For the
> > training data I selected 10 drivers for training and 5 drivers for testing.


Less bias if you average over 3 trials so that each driver is used twice for training and once for testing.
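For example, one way to rotate the drivers (a minimal sketch; the data assembly and scoring are left as comments since they depend on your own code):

folds = reshape(randperm(15), 3, 5);            % 3 disjoint groups of 5 drivers
for k = 1:3
    testdrivers  = folds(k,:);                  % 5 drivers held out for testing
    traindrivers = setdiff(1:15, testdrivers);  % remaining 10 used for training
    % ... assemble P,T from traindrivers, train, score on testdrivers ...
end
% average the 3 test scores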

> > For the testing data, for example, d1 has 50 lane deviations.
> > The total length of the lateral-position record is 400320 samples after I cut out the durations when the drivers are out of the lane, and the same length for the other features...

Training data:

The 150-dim input vector corresponding to a "1" target is one second of 3-dim data (@ 50 samples/sec) with lateral deviation < 0.81 m JUST BEFORE a continuous interval >= 1 sec with lateral deviation >= 0.81 m.

Correct?

The 150-dim input vector corresponding to a "0" target is one second of 3-dim data (@ 50 samples/sec) with lateral deviations < 0.81 m JUST AFTER a continuous interval >= 1 sec with lateral deviations >= 0.81 m.

Correct?

Testing data:

The 150-dim input vector corresponding to a test target is a one-second sliding window of 3-dim data (@ 50 samples/sec).

Correct?
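If so, the extraction might look like this minimal sketch, where lat, spd, ang (row vectors of one driver's samples), onset (first sample with |lat| >= 0.81 m) and offset (first sample back inside the lane) are assumed names, not from your post:

Fs = 50;  w = Fs;                           % 50 samples/sec => 1-second window
p1 = [lat(onset-w:onset-1),  spd(onset-w:onset-1),  ang(onset-w:onset-1)]';   % 150x1, target 1
p0 = [lat(offset:offset+w-1), spd(offset:offset+w-1), ang(offset:offset+w-1)]'; % 150x1, target 0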

> > here is the code:
> > %% load inputs and targets data
> > Ptemp = P; % randomly ordered "1" or "0" windows
> > Ttemp = T; % targets (0 or 1)
> > rnd = randperm(3858);
> > for ii = 1:length(rnd)
> >     P(:,ii) = Ptemp(:,rnd(ii));
> >     T(:,ii) = Ttemp(:,rnd(ii));
> > end


If they are originally random, why do you have to randomize them again?

> > load y21; % testing files
> > A = y21;
> > % mapping and normalizing


You might want to try it without mapping and normalizing.

> > [pn,ps] = mapminmax(P);
> > [tn,ts] = mapminmax(T); % should I normalize the targets or not?


I see no reason to normalize the targets. {0,1} is fine and should
be used with a logsig output transfer function.

> > [an] = mapminmax('apply',A,ps);
> > %% create network (3 layers with 6 nodes, 10 nodes)


There is no reason to use more than one hidden layer.

> > ('traingdm','trainscg','trainlm','trainbfg') ... which one should I use as the training function?

Use 'trainscg' for classification problems with {0,1} targets. Otherwise use the default 'trainlm' unless there are speed/memory problems.

> > % net = newff(minmax(pn),[8 6 1],{'tansig','tansig','purelin'},'traingdm');
> > which one should I use as the activation function?


rand('state',0) % initialize the RNG so you can duplicate runs
net = newff(minmax(pn), [H 1], {'tansig','logsig'}, 'trainscg');

Find the minimum successful H by trial and error.

> > % % view(net)

Delete the lines below to use as many defaults as possible:

> > % net.trainParam.epochs = 1000;
> > % net.performFcn = 'mse';
> > % net.trainParam.show = 50;
> > % net.trainParam.goal = 1e-6;
> > % net.trainParam.lr = 0.3;
> > % net.trainParam.min_grad = 1e-6;
> > % net.efficiency.memoryReduction = 1;
> > % net.trainParam.mc = 0.6;
> > % net = init(net);
> > % net = train(net,P,T);


Delete the lines above to use as many defaults as possible.

net.trainParam.goal = 0.01*mean(var(T)) % MSE goal = 1% of the target variance
net.trainParam.show = 10

Loop over candidate values for H (10 trials each) to optimize H and
the random initial weights; see the sketch below.
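A minimal sketch of that search (the candidate range 1:10, Ntrials = 10, and scoring on the training set are assumptions to adjust):

Ntrials = 10;                              % random-weight trials per candidate H
MSEgoal = 0.01*mean(var(T));               % same goal as above
MSE = zeros(10, Ntrials);
for H = 1:10                               % candidate numbers of hidden nodes
    for trial = 1:Ntrials
        net = newff(minmax(pn), [H 1], {'tansig','logsig'}, 'trainscg');
        net.trainParam.goal = MSEgoal;
        net = train(net, pn, T);
        MSE(H,trial) = mse(T - sim(net,pn)); % record performance of this trial
    end
end
% choose the smallest H for which min(MSE(H,:)) <= MSEgoal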

> > % [net,tr] = train(net,pn,tn);

[net, tr, Y, E] = train(net, pn, T); % equivalently: Y = sim(net,pn); E = T - Y

I'll let you figure out the rest.

Greg

> > % t_pred = sim(net,pn);
> > % y = sim(net,an);
> > % y1 = mapminmax('reverse',y,ts);
> > % % y1 = mapstd('reverse',y,ts);
> > % predicted = hardlim(y1' - 0.5);
> > Please read my notes beside the code and tell me whether it is correct or not. I have trained the network, and it gives me 100% sensitivity and 98% specificity for most testing drivers.
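For reference, those figures can be checked from the thresholded outputs with a sketch like this (predicted and the true test labels Ttest are assumed to be 0/1 vectors with the same orientation):

TP = sum(predicted == 1 & Ttest == 1);     % lane departures correctly flagged
TN = sum(predicted == 0 & Ttest == 0);     % in-lane windows correctly rejected
FP = sum(predicted == 1 & Ttest == 0);     % false alarms
FN = sum(predicted == 0 & Ttest == 1);     % missed departures
sensitivity = TP/(TP + FN)
specificity = TN/(TN + FP)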