

Luis A. Afonso
Posts: 4,758
From: Lisbon (Portugal)
Registered: 2/16/05

Butchers disguised as surgeons - J'accuse
Posted: May 14, 2013 6:44 PM


My comments concern J. P. Schneider's paper "Caveats for using significant tests . . ." (arxiv.org/pdf/1112.2516). (The title of a famous open letter by Emile Zola in defense of Alfred Dreyfus is borrowed.)

The Author, J. Schneider, does not try to hide his purpose: the title alone makes us well aware of it. In the first lines of the Abstract one can read:

J. P. S., pp. 1-2: "Statistical significance tests are highly controversial and numerous criticisms have been leveled against their use."

R: Did you never find it amazingly odd that the people concerned are Psychologists, Biologists, Social and Behavioral Scientists, and so on, but not one Mathematician/Statistician? Like a butcher who disagrees with the way kidney transplants are performed in hospitals? It all comes down to these simple points: what the aim of NHSTs is, what their limitations are, and what they are unable to reach.

J. P. S., p. 3: "We claim that the use of such tests do not provide any advantages in relation to decide whether differences . . . are important or not."

R: On the same page the Author, protesting against *the flawed ritual in detriment of critical (scientific) thinking*, complains that statistically significant differences could be unimportant in practice. HERE there are two misunderstandings: a significant result means only that the observed difference must not, with high plausibility, be ascribed to chance; and when we want to know whether a given positive difference is really present we should attach a margin to the observed value (adding and subtracting it). THIS IS the point that led some statistical users to *invent* the, of course unnecessary, so-called effect size. For what, I ask, if the procedure has been implicit in the Theory since its invention? To suppose Jacob Cohen did not know it would show a dreadful naivety!
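The point about attaching a margin to the observed difference can be illustrated with a short sketch of my own (made-up data, not from the post or the paper): run a two-sample comparison and report both the test statistic and a 95% interval around the observed difference, so significance and practical size are read off together.

```python
import math
import random
import statistics

random.seed(1)

# Hypothetical data: two groups whose true means differ by 0.3 (assumed for illustration).
a = [random.gauss(0.0, 1.0) for _ in range(200)]
b = [random.gauss(0.3, 1.0) for _ in range(200)]

diff = statistics.mean(b) - statistics.mean(a)
# Standard error of the difference between two independent sample means.
se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))

z = diff / se                                # test statistic (normal approximation, large n)
ci = (diff - 1.96 * se, diff + 1.96 * se)    # 95% interval around the observed difference

print(f"observed difference = {diff:.3f}")
print(f"z = {z:.2f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

The interval is exactly the "add and subtract at the observed value" operation the post describes; no separate machinery is needed beyond the same standard error that drives the test.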
Elementary things a teenager must grasp before, later on, as a professional, trying to *correct* NHST theory (or claiming that it has *numerous flaws, misconceptions and misuses*):

___1___ When we write the hypotheses as H0: p = 0, Ha: p ≠ 0, it is NEVER claimed that the parameter is null under the null hypothesis. Indeed, we are unable to prove (or disprove) either the null or the alternative hypothesis. Therefore, definitively, in performing the test we are trying to settle whether p cannot be distinguished from zero, in which case we do not reject p = 0, or whether, on the contrary, the data show zero to be implausible. This *possibility* does not mean at all that nullity is attained, of course: my *tool* is simply not accurate enough to assure that. If more data are collected I could gain sufficient evidence to abandon this conclusion and prefer the alternative instead. Here the *smart* anti-NHST people say: why should I believe in NHST if one thing and its contrary depend on the sample size? Impossible, the test is flawed!

___2___ No surprise: the p-value is discussed. The *anti* crowd are very proud to emphasize that the *recipe* p-value < alpha to reject the Null Hypothesis is completely absurd, since alpha is a value chosen before the test execution to limit the type I error, whereas the p-value is data-dependent: "the p-value is the probability of obtaining a test statistic at least as extreme as the one that was actually observed, assuming that the null hypothesis is true" (Wikipedia). It is enough to point out, for their eternal shame, that alpha is usually chosen low (0.05) and that, with H0 true, the possible p-values follow a Uniform distribution; hence a p-value below alpha coexisting with a true H0 is very unlikely, and the *mystery* fades away . . . A few simulations and everybody is fully convinced.
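That closing remark about simulations can be checked directly. A minimal sketch of my own (standard-library Python, with a normal z statistic standing in for a generic test): simulate many tests in which H0 is exactly true and look at the resulting p-values; the fraction falling below alpha should sit near alpha, as the Uniform distribution predicts.

```python
import math
import random

random.seed(0)

def p_value_two_sided(z):
    """Two-sided p-value for a standard-normal test statistic."""
    # Normal CDF built from the error function, so no external libraries are needed.
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))

# Simulate 10,000 z-tests in which H0 is exactly true:
# each test statistic is a single standard-normal draw.
pvals = [p_value_two_sided(random.gauss(0.0, 1.0)) for _ in range(10_000)]

alpha = 0.05
rejection_rate = sum(p < alpha for p in pvals) / len(pvals)

print(f"mean p-value under H0: {sum(pvals) / len(pvals):.3f}")   # near 0.5 (Uniform)
print(f"fraction of p-values below {alpha}: {rejection_rate:.3f}")  # near 0.05
```

Under a true null the p-values scatter uniformly over (0, 1), so only about 5% of them land below 0.05, which is precisely the controlled type I error rate.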
Luis A. Afonso



