> Now... I tried a lot of different operations and I discovered that it is
> extremely slow to do
>   A = (mask==num)*255;              (1)
> (casting or not casting num to uint8 changes less than 0.1%)
>
> I found it instead to be very fast, acceptably so for my purposes, to do
>   A = uint8(mask==num)*uint8(255);  (2)
>
> The difference between (1) and (2) is 300 ms for the first and
> 15 ms for the second!!!
> A factor of 20.
> Very curious indeed.

I see you have found what I was going to suggest; namely, that doing computations in non-native word lengths can be very time-consuming, owing to the behind-the-scenes casts and the addressing overhead of not pulling out and working on full words.
I'd expect that doing the operation entirely in doubles and casting only the final result would be slightly better still.
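A rough way to compare the variants (hypothetical array size; `mask` and `num` stand in for the original variables, and actual timings will vary by machine and ML version):

```matlab
% Hypothetical setup approximating the original problem
mask = uint8(randi(10, 2000, 2000));   % large uint8 label image
num  = 5;

tic, A1 = (mask==num)*255;              toc  % (1) logical*double: result is double
tic, A2 = uint8(mask==num)*uint8(255);  toc  % (2) stays in uint8 throughout
tic, A3 = uint8((mask==num)*255);       toc  % compute in double, cast once at the end
```

Note that (1) silently produces a double array, so it also allocates eight bytes per element instead of one; part of the speed difference is simply memory traffic.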
> (Still wondering whether I can use logical indexing to do something like:
>   aa = (mask==num);
>   maskC(aa,1) = 250;
> and get the best performance of "vectorized" MATLAB.)
No. There's a mismatch between the dimensions of the logical indexing array and those of the target that ML doesn't handle. It seems like it should be possible, but the syntax simply isn't implemented.
What happens is shown by the following much smaller example of your case...
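Something along these lines (a reconstructed sketch, since the original example isn't shown here; `M` and `ix` are small stand-ins for `maskC` and `aa`):

```matlab
M  = reshape(1:24, [2 3 2 2]);  % small 4-D stand-in for maskC
ix = rand(2, 3, 2) > 0.5;       % logical over the first three dims ("the two planes")

M(ix)       % the true elements come back as a plain vector
% M(ix,1)   % what we'd like: the logical mask plus a subscript for the
%           % 4th dimension -- but ML rejects this combination
```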
As you see, when you try to index into M with ix, the elements at the true locations are returned as a vector. The syntax of keeping ix as the two planes and then adding another subscript for the extra dimension just doesn't work--ML doesn't know how to do that.
Your only recourse would be to turn the index locations in ix above into linear indices into the 4-D array, computed from its size and ML's column-major storage order.
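One way to build those linear indices (a sketch, assuming maskC is 4-D and aa is a logical array over its first three dimensions; the sizes are hypothetical, the names follow the quoted post):

```matlab
mask  = randi(10, 4, 5, 3);     % hypothetical 3-D label array
maskC = zeros(4, 5, 3, 2);      % hypothetical 4-D target
num   = 5;

aa  = (mask == num);            % 3-D logical, same shape as one "page" of maskC
lin = find(aa);                 % linear indices within a single 3-D page

% Column-major storage means the first three dimensions are contiguous,
% so page p of the 4th dimension is offset by (p-1)*numel(mask) elements.
page = 1;
maskC(lin + (page-1)*numel(mask)) = 250;
```

The find/offset arithmetic keeps everything vectorized, so you get the speed you were after without the unsupported mixed logical/subscript syntax.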