

Luis A. Afonso
Posts: 4,743
From: Lisbon (Portugal)
Registered: 2/16/05


The war against NHST
Posted: Jun 22, 2014 6:47 PM


In NHST, many people outside statistics are persuaded (ever since their student days) that the symbol for the null hypothesis condition, namely H0: theta = k, indicates that we aim to establish that the parameter under study has the indicated value. Statisticians know well that the objective is in fact rather different: taking the significance level into account, we simply conclude either

   There is not sufficient evidence from the data that the parameter theta differs from the preset value k, and therefore we do not reject H0,

or, alternatively (complementarily),

   The evidence is sufficient to reject theta = k, because the data are very unlikely to have originated from a population with theta = k, such an occurrence having probability < alpha.

This point has direct consequences for the classic Type I and Type II errors. Instead of saying that a Type I error (probability alpha) is made when we reject a true H0, we prefer to say that we mistakenly reject, duped by a sample that is non-representative with respect to the parameter. Likewise, a Type II error is not (as we read in textbooks) to accept H0 wrongly when the alternative hypothesis is true, but rather that we are wrongly led not to reject the null, even though samples representative of theta would indicate rejection. Of course, my proposal to modify these statements is of no worth to statisticians, who have long known sufficiently well that "true" and "untrue" cannot be ascribed to hypotheses, nor do they use these wordings as anything but technical jargon. The danger is that users from other branches, namely some psychologists, biologists, and ecologists, are rather prone to criticize the merits of NHST, even going so far as to press scientific journals to ban the procedure. This is not the first time that arrogance and ignorance have lived together. On two points, though, the critics are right: the p-value does not directly inform us how far the data are from the null, and NHST does not tell us what the probability of the null is.
The first can easily be improved: use confidence intervals with an added quantity [1] to the null; the second requires a completely different reasoning: the Bayesian way.

[1] Cohen (I would believe, naively . . .) introduced the quantity D, the observed difference divided by its standard deviation, to account for the net difference; this led me to suppose that he ignored that a similar result can be obtained by adding/subtracting d to the observed difference, and THIS IS A CURRENT AND STRAIGHT WAY TO OBTAIN A CONFIDENCE INTERVAL FOR IT. See:

www.statisticalsolutionssoftware.com/wp...Hauschke...

where

   H0,d: mu1 - mu2 <= d      Ha,d: mu1 - mu2 > d      (1)

which is evidently equivalent to my

   H0: mu1 - mu2 - d <= 0    Ha: mu1 - mu2 - d > 0    (2)

and so the non-null test (1) is transformed into a null one (2). The confidence interval for d then follows from the statistic

   T = (mu1 - mu2 - d) / s

   s*T1 < mu1 - mu2 - d < s*T2
   -s*T1 > -(mu1 - mu2) + d > -s*T2
   mu1 - mu2 - s*T1 > d > mu1 - mu2 - s*T2
   mu1 - mu2 - s*T2 < d < mu1 - mu2 - s*T1

With T1 approx. -2 and T2 = -T1:

   mu1 - mu2 - 2*s < d < mu1 - mu2 + 2*s

where s is the standard deviation of the difference of means. Therefore the CI is centered on the observed difference, and its half-amplitude is s*T, T being the Student quantile at alpha/2 (respectively 1 - alpha/2), where 1 - alpha is the confidence level of the CI containing d.
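The interval derived above can be sketched in a few lines of code. This is my own illustration, not from the post: it takes two samples, centers the interval on the observed difference of means, and uses half-width s * t_crit with the post's T1 = -2, T2 = +2 approximation of the Student quantiles (roughly a 95% interval); s is computed in the Welch form sqrt(v1/n1 + v2/n2).

```python
import math
import statistics

def diff_ci(sample1, sample2, t_crit=2.0):
    """Two-sided confidence interval for the difference of population
    means, centered on the observed difference x1bar - x2bar with
    half-amplitude s * t_crit, where s is the standard deviation of the
    difference of means (Welch form).  t_crit = 2.0 mirrors the post's
    T1 = -2, T2 = +2 approximation for a ~95% interval."""
    n1, n2 = len(sample1), len(sample2)
    x1 = statistics.fmean(sample1)
    x2 = statistics.fmean(sample2)
    v1 = statistics.variance(sample1)  # sample variance of each group
    v2 = statistics.variance(sample2)
    s = math.sqrt(v1 / n1 + v2 / n2)   # std. dev. of the difference of means
    diff = x1 - x2
    return diff - t_crit * s, diff + t_crit * s
```

For two identical samples the observed difference is 0 and the interval is symmetric about it, exactly the "CI centered on the observed difference" described above; for an exact interval one would replace t_crit = 2 with the Student quantile for the actual degrees of freedom.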
Luis A. Afonso



