

condor
Posts: 32
Registered: 3/2/10

Re: GPU and Neural Network ... is this operation legitimate?
Posted: Nov 20, 2012 5:39 AM


"condor" wrote in message <k8fiu8$hoe$1@newscl01ah.mathworks.com>...
> Hi, I need to compute many simple Neural Networks... thousands.
>
> I would like to use a parallel for on the GPU.
>
> The FIRST problem is that I am not able to compute the net on the GPU:
>
> % Net creation
> hiddenLayerSize = 10;
> net = fitnet(hiddenLayerSize);
> % Setup division of data for training, validation, testing
> net.divideParam.trainRatio = 70/100;
> net.divideParam.valRatio = 15/100;
> net.divideParam.testRatio = 15/100;
> % Training
> net.trainFcn = 'trainscg';
> net.trainParam.showWindow = false;
> net.trainParam.showCommandLine = false;
> % Send input and target to the GPU
> input_A = gpuArray(input);
> target_A = gpuArray(target);
> % Train
> [net_new,~] = train(net,input_A,target_A);
>
> I receive this error:
> Error using network/train (line 293)
> Number of samples (rows of gpuArrays) of data arguments do not match.
>
> QUESTION:
> Is it possible to train the net on the GPU (I want every core to train a different net)? If yes, what am I doing wrong?
Cool! I solved it! This is the code:
>> input_A = gpuArray(input);
>> input_A = nndata2gpu(input);
>> target_A = nndata2gpu(target);
>> net = configure(net,input,target);
>> net = train(net,input_A,target_A);
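For what it's worth: the original error arises because train expects gpuArray data in a transposed layout (samples as rows), which nndata2gpu produces; a plain gpuArray keeps the usual column-per-sample layout and the row counts no longer match. Note the first gpuArray line above is redundant, since nndata2gpu overwrites input_A. In R2012b-era and later versions of the toolbox, train can also manage the GPU transfer itself via the 'useGPU' flag. A minimal sketch, assuming input and target are already defined and a supported GPU is present:

```matlab
% Sketch: let train handle the GPU transfer itself (R2012b+ toolbox).
% No manual gpuArray/nndata2gpu calls are needed in this form.
hiddenLayerSize = 10;
net = fitnet(hiddenLayerSize);
net.trainFcn = 'trainscg';
net.trainParam.showWindow = false;
net.trainParam.showCommandLine = false;
net = configure(net, input, target);   % size the net to the data
net = train(net, input, target, 'useGPU', 'yes');
```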
I am not sure whether this uses the whole GPU or just one core, as I want...
Now I need to create a parallel for ... any idea?
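On the parallel for: as far as I know, a single GPU cannot train a different net on each core — train parallelizes the computation of one net across the GPU. To train thousands of independent small nets concurrently, the usual route is a parfor loop over CPU workers with the Parallel Computing Toolbox. A minimal sketch, assuming input and target are defined, numNets is illustrative, and a worker pool is already open (matlabpool in 2012-era MATLAB):

```matlab
% Sketch: train many independent nets in parallel on CPU workers.
% Assumes Parallel Computing Toolbox and an open worker pool.
numNets = 1000;              % hypothetical count; use as many as needed
nets = cell(numNets, 1);
parfor k = 1:numNets
    net = fitnet(10);                    % one small net per iteration
    net.trainFcn = 'trainscg';
    net.trainParam.showWindow = false;
    net.trainParam.showCommandLine = false;
    net = configure(net, input, target);
    nets{k} = train(net, input, target); % independent training runs
end
```

Each iteration is independent, so parfor can distribute them freely; the trained nets are collected in the nets cell array.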



