Though it is plain common sense, nobody doubts that Fisher was right: the null hypothesis cannot be proved true, and NHST was never designed to answer that impossible question. Rather, the procedure measures how plausible the observed data are under the assumption that the null hypothesis is true. Nothing more, nothing less. Many people, psychologists in particular, attack NHST on various grounds, notably that it does not tell us how large the departure from the null is. And they are right in one sense: a significant test result says only that the difference cannot be ascribed to random fluctuation alone. But they seem to ignore that measuring effect size is simply not something a significance test can deliver. Presumably they were not properly taught this as students of mathematical statistics, and they have been too careless, or too stubborn, to revisit their views since.

____________
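The point that significance says nothing about magnitude can be illustrated with a small simulation (a sketch with made-up parameters: two populations whose means differ by a practically negligible 0.02 standard deviations, but with a very large sample):

```python
import math
import random

# Hypothetical demonstration: with a large enough sample, even a tiny
# true difference in means produces a "significant" z statistic, so
# significance alone says nothing about how large the effect is.
random.seed(42)

n = 1_000_000
# Two populations whose means differ by only 0.02 standard deviations.
x = [random.gauss(0.00, 1.0) for _ in range(n)]
y = [random.gauss(0.02, 1.0) for _ in range(n)]

xbar = sum(x) / n
ybar = sum(y) / n
# Standard error of the difference of means (both population sds are 1 here).
se = math.sqrt(2.0 / n)
z = (xbar - ybar - 0) / se  # the "- 0" is the value of the difference under the null

print(f"observed difference: {xbar - ybar:+.4f}")  # tiny, around -0.02
print(f"z statistic:         {z:+.2f}")            # far beyond +/-1.96
```

The test rightly rejects the null (the difference is not pure random fluctuation), yet the effect itself is trivially small; the test was never meant to say otherwise.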
I would risk a bet on how puzzled they are when reading textbook expressions like

Z = (xbar - ybar - 0) / s

"Minus zero," they complain, "why is that there? It is zero, it should be dropped!" Who is brave enough to persuade them that the 0 is of utmost importance? It is the value of the difference under the null hypothesis. Replace 0 by d. If the new test statistic follows a standard Normal N(0,1) distribution, then with 95% probability

-1.96 < (xbar - ybar - d) / s < 1.96

which rearranges into the 95% confidence interval

(xbar - ybar) - 1.96*s < d < (xbar - ybar) + 1.96*s

This means I obtain an interval, centered at the observed difference of means xbar - ybar, extending roughly two standard errors on either side, in which the population difference of means is likely to lie with 95% confidence. Sorry, theory cannot go further than that. . .
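The rearrangement above can be sketched directly in code (a minimal illustration with made-up sample data; `x` and `y` are hypothetical measurements, and the large-sample 1.96 normal quantile is used as in the text):

```python
import math
import statistics

# Hypothetical samples, for illustration only.
x = [5.1, 4.9, 5.3, 5.0, 5.2, 4.8, 5.1, 5.0]
y = [4.6, 4.8, 4.5, 4.7, 4.9, 4.4, 4.6, 4.7]

xbar, ybar = statistics.mean(x), statistics.mean(y)

# Estimated standard error of the difference of means.
s = math.sqrt(statistics.variance(x) / len(x) + statistics.variance(y) / len(y))

# 95% CI for the population difference d, exactly the rearranged inequality:
# (xbar - ybar) - 1.96*s < d < (xbar - ybar) + 1.96*s
lo = (xbar - ybar) - 1.96 * s
hi = (xbar - ybar) + 1.96 * s

print(f"observed difference of means: {xbar - ybar:.3f}")
print(f"95% CI for d: ({lo:.3f}, {hi:.3f})")
```

The interval is centered at the observed difference and its half-width is 1.96 standard errors, which is all the significance-testing machinery, inverted, can give us.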