The authors begin by pointing out that criticism of NHST has come mainly from people in psychology and education. We add: not from statisticians or mathematicians, fields where, we believe, more technically rigorous readers are likely to be found. They identify the major problem of NHST, quoting Kirk, as being that it does not provide what the researcher ultimately wants to know: whether the null hypothesis is credible given the observed data (the Bayesian question), not whether data like those observed should be considered improbable under the hypothesis. No objection here; that is simply the classical paradigm: we reject H0 either because it is false, or because, if it is true, the probability of obtaining data such as those observed is very small. Since we set this probability at 5%, we avoid this Type I error 19 times out of 20 (when the null is true).

Quoting Kirk again: because the null is always false, and with sufficient data one is sure to reject it, hypothesis testing is a trivial exercise. WRONG: we must not read H0: θ = 0 as an arithmetic statement, but rather as the statement that (if H0 is retained) the test is unable to distinguish the parameter from 0. The second part of Kirk's claim is true, but we ask: of what use is it that only a sample of size n = 1,000,000,000 (one thousand million) can assure us that θ exceeds d = 0.000000001? Or, put another way, how many coin flips n are enough to establish that the probability of heads is p = 0.5 + d?

Reading on, we agree completely with Cohen and with Mulaik, Raju & Harshman that "a null hypothesis of no difference should be replaced by a null hypothesis specifying a non-zero value based on previous research." And we add: or a value connected with the "practical/economic importance" attached to the new finding.
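The sample-size point can be made concrete with the standard normal-approximation formula for a two-sided one-sample test of a proportion: the n required to detect a shift d from p = 0.5 grows roughly as 1/d². This is a minimal sketch of that arithmetic; the helper `required_n` and the fixed quantile values are our illustrative choices, not anything from the paper under review.

```python
import math

# Standard-normal quantiles (assumed fixed here):
Z_ALPHA = 1.96   # two-sided 5% significance level
Z_BETA = 0.8416  # 80% power

def required_n(d, p0=0.5):
    """Approximate n needed to detect H1: p = p0 + d against H0: p = p0,
    using the normal approximation to the binomial."""
    p1 = p0 + d
    num = (Z_ALPHA * math.sqrt(p0 * (1 - p0))
           + Z_BETA * math.sqrt(p1 * (1 - p1)))
    return math.ceil((num / d) ** 2)

# Required n explodes as d shrinks (roughly as 1/d^2), so a
# vanishingly small "effect" demands an astronomically large sample:
for d in (0.1, 0.01, 0.001, 1e-9):
    print(f"d = {d:g}: n approx. {required_n(d):,}")
```

For d = 10⁻⁹ the formula gives n on the order of 10¹⁸, which illustrates why statistical detectability of a nonzero θ says nothing about its practical importance.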