

condor
Posts: 30 | Registered: 3/2/10


GPU and Neural Network ... is this operation legitimate?
Posted: Nov 20, 2012 4:36 AM


Hi, I need to train many simple neural networks, thousands of them.
I would like to use a parallel for loop on the GPU.
The FIRST problem is that I am not able to train even one net on the GPU:

% Net creation
hiddenLayerSize = 10;
net = fitnet(hiddenLayerSize);

% Setup division of data for training, validation, testing
net.divideParam.trainRatio = 70/100;
net.divideParam.valRatio = 15/100;
net.divideParam.testRatio = 15/100;

% Training settings
net.trainFcn = 'trainscg';
net.trainParam.showWindow = false;
net.trainParam.showCommandLine = false;

% Send input and target to the GPU
input_A = gpuArray(input);
target_A = gpuArray(target);

% Train
[net_new,~] = train(net,input_A,target_A)
I receive this error:

Error using network/train (line 293)
Number of samples (rows of gpuArrays) of data arguments do not match.
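One likely cause (an assumption on my part, based on the wording "rows of gpuArrays" in the error): when data is passed to train as gpuArray matrices, samples are expected along the rows rather than the columns, the opposite of the usual CPU layout. If input and target are stored inputs-by-samples and targets-by-samples, transposing before moving them to the GPU would make the row counts match. A minimal sketch:

```matlab
% Sketch, assuming 'input' is inputs-by-samples and 'target' is
% targets-by-samples (the usual CPU convention). Transpose so that
% each ROW is one sample, as train expects for gpuArray data.
input_A  = gpuArray(input');    % samples-by-inputs
target_A = gpuArray(target');   % samples-by-targets
[net_new,tr] = train(net,input_A,target_A);
```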
QUESTION: Is it possible to train the net on the GPU (I want each core to train a different net)? If yes, what am I doing wrong?
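On the "one net per core" point, a note that may help frame the question: a single call to train with gpuArray data parallelizes the arithmetic of one training run across the GPU's cores; it does not assign one net per core. To train thousands of independent small nets concurrently, one common pattern (a sketch under stated assumptions, not a definitive answer) is a parfor loop over the nets on a parallel pool of workers, where inputs{i} and targets{i} are hypothetical cell arrays holding each net's data:

```matlab
% Hypothetical sketch: one independent net per parfor iteration.
% Assumes cell arrays inputs{i} and targets{i} exist for N problems.
N = numel(inputs);
nets = cell(N,1);
parfor i = 1:N
    net = fitnet(10);                       % same architecture per net
    net.trainFcn = 'trainscg';
    net.trainParam.showWindow = false;
    net.trainParam.showCommandLine = false;
    nets{i} = train(net, inputs{i}, targets{i});
end
```

Whether each worker should also use the GPU depends on the net size; for nets this small, the per-call GPU transfer overhead may outweigh any speedup.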



