Hi! I have the same problem. When I tried using "spmd M = <big data>; end", the program ended with an error. I suspect the data is too big (over 2.5 GB) for spmd. Do you know of any solutions? Thanks!

Edric M Ellis <firstname.lastname@example.org> wrote in message <email@example.com>...
> "Chuck37 " <firstname.lastname@example.org> writes:
>
> > I have a big piece of never-changing data that I'd like to be used by
> > all the workers in a parallel setting. Since I'm only working with
> > local workers, I don't understand why I have to eat the huge latency
> > of transferring the data to the workers each time the parfor loop is
> > called. Can someone explain?
> >
> > I tried to use spmd to send the data once at the beginning and have it
> > stay there, but the data is fairly big (~2 GB going to 10 workers),
> > and I got "error during serialization" anyway. Is there a solution for
> > local workers where they can all access the data from the same memory?
> > Accesses are infrequent, so simultaneous access shouldn't cause a big
> > slowdown.
> >
> > Any ideas would be great.
> >
> > My setup is something like this, by the way:
> >
> > M = big data;
> > for x = 1:m
> >     stuff
> >     parfor y = 1:n
> >         a(y) = someFunction(M, a(y));
> >     end
> >     stuff
> > end
> >
> > parfor is presently worse than a plain for loop because of the
> > overhead of sending M every time.
>
> You could use my worker object wrapper, which is designed for exactly
> this sort of situation. See
>
> <http://www.mathworks.com/matlabcentral/fileexchange/31972-worker-object-wrapper>
>
> In your case, you could use it like this:
>
> spmd
>     M = <big data>;
> end
> M = WorkerObjWrapper(M);
>
> for ...
>     parfor ...
>         a(y) = someFunction(M.Value, ...);
>     end
> end
>
> By building M on the workers directly and constructing the
> WorkerObjWrapper from the resulting Composite, the data is never
> actually transmitted over the wire at any stage, so you should
> experience no problems with the current 2 GB transfer limit.
>
> Cheers,
>
> Edric.
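To make the pattern concrete, here is a minimal sketch of what Edric describes. It assumes WorkerObjWrapper (File Exchange #31972, linked above) is on the path; the file name `bigdata.mat` and the function `someFunction` are placeholders, and loading from a MAT-file inside spmd is just one way to build the data on the workers without serializing it:

```
% Sketch only -- assumes WorkerObjWrapper is on the MATLAB path and that
% the big matrix is saved in bigdata.mat as a variable named M.

spmd
    % Each worker loads (or constructs) its own copy locally, so the
    % data never crosses the ~2 GB serialization limit.
    s = load('bigdata.mat', 'M');
    M = s.M;          % M is now a Composite: one copy per worker
end
M = WorkerObjWrapper(M);    % wrap the Composite; no data is transmitted

for x = 1:m
    parfor y = 1:n
        % M.Value fetches the worker-local copy instead of re-sending M
        % from the client on every parfor invocation.
        a(y) = someFunction(M.Value, a(y));
    end
end
```

The key point is that the wrapper is built from a Composite that already lives on the workers, so parfor only ships the small wrapper handle, not the 2+ GB matrix.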