Views expressed in these public forums are not endorsed by
Drexel University or The Math Forum.



Details of self organizing map training...
Posted:
Jan 24, 2013 9:28 PM


This question concerns the learnsomb function (batch self-organizing map learning) in the Neural Network Toolbox. The function contains a line of code like a = a .* double(rand(size(a))<0.9); which randomly sets roughly 10% of the entries of the matrix a to zero. This is effectively like ignoring 10% of the training data on each iteration.
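To see concretely what that line does, here is a NumPy translation (a sketch for illustration, not the toolbox's actual code; the matrix a here is just a stand-in for whatever learnsomb applies the mask to):

```python
import numpy as np

rng = np.random.default_rng(0)
a = np.ones((200, 200))  # stand-in for the matrix being masked

# NumPy equivalent of the MATLAB line:
#   a = a .* double(rand(size(a)) < 0.9);
# rand(...) < 0.9 is true with probability 0.9, so about 10% of
# the entries of a are multiplied by 0 on each call.
mask = (rng.random(a.shape) < 0.9).astype(float)
a = a * mask

frac_zeroed = 1.0 - a.mean()  # fraction of entries zeroed out
print(frac_zeroed)  # close to 0.10 on average
```

Since a fresh mask is drawn every time the line runs, a different random 10% is dropped on each iteration, which is the behavior the question is asking about.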
Does anyone know why this is here? I can't think of a good reason for it. It seems to prevent the algorithm from ever converging to a stable solution, since the effective training data changes on every iteration. The documentation doesn't cite any references for where the algorithm came from.
Matt



