


Topic: Yatani's and NHST
Replies: 4   Last Post: Oct 27, 2012 4:30 PM

Luis A. Afonso   Posts: 4,758   From: Lisbon (Portugal)   Registered: 2/16/05
Yatani's and NHST
Posted: Oct 26, 2012 3:01 PM

Yatani's and NHST

http://yatani.jp/HCIstats/NHST
______________________________________________

At least two dangerous tendencies can be found in the anti-NHST papers: using loose everyday language to describe the method, and taking insufficient care to learn what the purpose of this tool actually is.

Quoting:
* Myth 1: Meaning of p value
Let's say you have done some kinds of NHST, like t test or ANOVA. And the results show you the p value. But, what does that p value mean? You may think that p is the probability that the null hypothesis holds with your data. This sounds reasonable and you may think that is why you reject the null hypothesis. The truth is that this is not correct. Don't get upset. Most of the people actually think it is correct. What the p value means is if we assume that the null hypothesis holds, we have a chance of p that the outcome can be as extreme as or even more extreme than we observed. Let's say your p value is 0.01. This means you have only 1% chance that the outcome of your experiment is like your results or shows a even clearer difference if the null hypothesis holds. So, it really doesn't make sense to say that the null hypothesis is true. Then, let's reject the null hypothesis, and we say we have a difference. The point is that the p value does not directly mean how likely what the null hypothesis describes happens in your experiment. It tells us how unlikely your observations happen if you assume that the null hypothesis holds. So, how did we decided "how unlikely" is significant? This is the second myth of NHST.*~
__________________
My Comment
Aristotelian logic (true or false) is unproductive where random experiments are concerned. In particular, one cannot prove that a hypothesis is true or false: the only thing we are able to attain is whether it is likely or unlikely, and an error is always attached to whatever conclusion we draw.
Yatani says (rightly) that the p-value does not tell us how likely the null hypothesis is; it never measures the probability that H0 holds, and nobody claims that it does. However, we know that if H0 is untrue the p-values crowd toward the left end of [0, 1] (near zero); on the contrary, if the null is true they tend to be uniformly distributed over [0, 1]. What is it inductively proper to think if my (single) p-value falls very close to 1?
I leave the conclusion to the reader's common sense.
For me, I have no doubt: using simulated normal data I checked that the farther the population mean is from the tested value (H0), the more the p-values crowd toward zero.
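An experiment of this kind can be sketched as follows (plain Python; the sample size, mean shift, and trial count are arbitrary assumptions for illustration, not the values used in the original simulation). It draws many p-values under a true null and under a false null and summarizes their distributions:

```python
import math
import random

random.seed(2)
n, trials = 30, 4000

def phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pvalue(true_mean):
    """Two-sided z-test p-value for H0: mean = 0, sigma = 1 known,
    computed from one simulated sample of size n drawn with the given true mean."""
    m = sum(random.gauss(true_mean, 1.0) for _ in range(n)) / n
    return 2.0 * (1.0 - phi(abs(m) * math.sqrt(n)))

p_null = sorted(pvalue(0.0) for _ in range(trials))   # H0 true
p_alt  = sorted(pvalue(0.5) for _ in range(trials))   # H0 false (mean shifted)

# Under a true null the p-values are roughly uniform on [0, 1]: median near 0.5
median_null = p_null[trials // 2]
# Under a false null they crowd toward zero
median_alt = p_alt[trials // 2]
frac_alt_below_05 = sum(p < 0.05 for p in p_alt) / trials
```

With these settings the null p-values spread evenly over [0, 1] while the alternative p-values pile up near zero, matching the behaviour described above.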

Luis A. Afonso
