Hi there, I'm hoping someone with the neural networks toolbox (and a little more knowledge than me) can answer a few questions.
1) The neural network toolbox manual says that the trainwh function uses the "Widrow-Hoff" learning algorithm. I can't find any direct reference to that name in Widrow's "Adaptive Signal Processing" book. Is the Widrow-Hoff learning rule the same as the Least Mean Squares (LMS) algorithm?
2) The trainwh function returns a bias value, but the LMS algorithm doesn't address adjustment of the bias input. What rule does this function use for bias adjustment?
3) The LMS algorithm is supposed to "jitter about" when it gets close to the minimum error (i.e., the best set of synaptic weights). But when I run the training algorithm, I get what appears to be an exponential decrease in the error function over time. This is good, but it doesn't mesh with the description of the LMS algorithm given in the literature.
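For reference, here is the LMS update I have in mind, as a rough Python sketch (I don't know what trainwh actually does internally; in particular, folding the bias in as an extra weight on a constant +1 input is just my guess, which is part of what I'm asking about in question 2):

```python
import random

def lms_train(samples, lr=0.05, epochs=50):
    """Train a single linear unit y = w.x + b with the LMS (Widrow-Hoff) rule.

    Bias handling is an assumption on my part: the same delta rule is
    applied to b, treating its "input" as a constant +1.
    """
    w = [0.0, 0.0]
    b = 0.0
    errors = []  # sum-of-squared-errors per epoch
    for _ in range(epochs):
        sse = 0.0
        for x, target in samples:
            y = w[0] * x[0] + w[1] * x[1] + b
            e = target - y
            # LMS update: w <- w + lr * e * x
            w[0] += lr * e * x[0]
            w[1] += lr * e * x[1]
            b += lr * e * 1.0  # guessed bias rule: input fixed at +1
            sse += e * e
        errors.append(sse)
    return w, b, errors

# Toy target t = 2*x0 - x1 + 0.5, noise-free, so I'd expect the error
# to decay smoothly here rather than jitter.
random.seed(0)
inputs = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20)]
samples = [((x0, x1), 2 * x0 - x1 + 0.5) for x0, x1 in inputs]

w, b, errors = lms_train(samples)
print(errors[0], errors[-1])
```

On noise-free data like this the error does drop steadily, which is why I suspect the jitter described in the literature only shows up with noisy targets, but I'd appreciate confirmation.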
Can anyone out there explain? I would really appreciate the help.