Luis A. Afonso
Posts:
4,758
From:
Lisbon (Portugal)
Registered:
2/16/05


Re: Where the lines go cross each other . . .
Posted:
Apr 15, 2014 6:53 AM


A simulation method to test hypotheses
Introduction
It is widely agreed that Null Hypothesis Significance Tests (NHST) are a reliable, useful, and massively used tool, providing worthwhile information on what happens in Nature through observation/experiment and on our assumptions about what is going on. In Classical Statistics, however, NHST have some features that are somewhat critical: they are unable to state that a hypothesis is true or untrue; they are strongly biased by the conventional 1/20 level; and they start by assuming the Null Hypothesis H0 true, thereby protecting it against rejection in favor of the alternative hypothesis, Ha. Finally, they provide the probability that the data are in conflict with the Null Hypothesis, not the probability of the Null given the data, which is a different (Bayesian) question.
The algorithm
Suppose we start by obtaining the 2.5% and 97.5% bounds of the confidence intervals under H0 and Ha, respectively (w0, w'0) and (wa, w'a).
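As a sketch of this step (my own illustration, not the author's program: I assume sample skewness as the test statistic and standard Normal vs. Gumbel(0,1) as the two models, matching the distributions the thread compares; sample size and repetition counts are arbitrary choices):

```python
import math
import random
import statistics

def skewness(xs):
    # Sample skewness: mean cubed deviation over cubed (population) std. dev.
    m = statistics.fmean(xs)
    sd = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * sd ** 3)

def sim_bounds(sampler, n=50, reps=5000, seed=1):
    # Simulate the statistic `reps` times under the given model and return
    # its 2.5% and 97.5% quantiles: the pair (w, w') of the text.
    rng = random.Random(seed)
    stats = sorted(skewness([sampler(rng) for _ in range(n)])
                   for _ in range(reps))
    return stats[int(0.025 * reps)], stats[int(0.975 * reps) - 1]

# H0: standard Normal; Ha: Gumbel(0, 1), sampled by inverse transform.
normal = lambda rng: rng.gauss(0.0, 1.0)
gumbel = lambda rng: -math.log(-math.log(rng.random()))

w0, w0p = sim_bounds(normal)   # (w0, w'0)
wa, wap = sim_bounds(gumbel)   # (wa, w'a)
print(w0, w0p, wa, wap)
```

Because the quantiles are read directly off the simulated distribution of the statistic, no large-sample approximation is involved, which is the point made under (b) below.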
In the figure the symbols (u, v) are stated in order (Null, Alternative), where 0 denotes failing to reject the hypothesis and 1 rejecting it.

w0____________w'0
a______________a'
________wa____________w'a
________b__________b'
__(0,1)______(0,0)______(1,0)__
If, by adjusting the sample size, we make the bounds b and a' coincide, we can discriminate as follows: if the test statistic is less than a' (and larger than a), we state (0, 1): fail to reject the Null and reject the Alternative, with alpha <= 5%; if the test statistic is larger than a' (but not larger than b'), the result is (1, 0), with Type II error for H0 <= 5%, i.e. Power = 95% at least. Of course, if the test statistic lies to the left of w0 (or to the right of w'a), both hypotheses are rejected.

What is 'new' here?
a) The H0 tolerance is eliminated. Both hypotheses have the same chance of being rejected, rather than H0 enjoying a 5% protection against significance, which constrains H1 to be chosen by exclusion only (the Neyman-Pearson method).
b) Because the sample under test is obtained by simulation, and the parameter under test originates from repetition under the model, the test is exact: the p-value equals the one obtained by simulation; no approximate model is involved.
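The decision rule above can be written down directly. A minimal sketch, using the (u, v) = (Null, Alternative) convention of the figure; the bound values here are placeholders, not computed from any real model:

```python
def classify(t, w0, w0p, wa, wap):
    # (u, v): u = 1 means reject H0, v = 1 means reject Ha;
    # 0 means the statistic falls inside that hypothesis's 95% interval.
    u = 0 if w0 <= t <= w0p else 1
    v = 0 if wa <= t <= wap else 1
    return (u, v)

# Illustrative bounds only:
w0, w0p, wa, wap = -0.6, 0.7, 0.3, 2.0

print(classify(-0.2, w0, w0p, wa, wap))  # inside H0 only -> (0, 1)
print(classify(0.5,  w0, w0p, wa, wap))  # inside both    -> (0, 0)
print(classify(1.5,  w0, w0p, wa, wap))  # inside Ha only -> (1, 0)
print(classify(3.0,  w0, w0p, wa, wap))  # outside both   -> (1, 1)
```

Note the rule treats the two hypotheses symmetrically: each is rejected exactly when the statistic falls outside its own 95% interval, which is point (a) above.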
No more than an idea . . . The thread shows examples of comparison between the Uniform, Standard Normal, and Gumbel(0,1) distributions through skewness, using one-tail tests, not the two-tail tests proposed here.
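As a quick sanity check on the two-tail scheme, one can estimate by simulation how often each model's own 95% interval rejects a fresh sample from that same model; the rate should sit near the nominal 5%. Again my own illustration, with skewness and the Normal vs. Gumbel(0,1) pair assumed as the models:

```python
import math
import random
import statistics

def skew(xs):
    # Sample skewness of a list of draws.
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

rng = random.Random(7)
n, reps = 50, 4000
normal = lambda: [rng.gauss(0, 1) for _ in range(n)]
gumbel = lambda: [-math.log(-math.log(rng.random())) for _ in range(n)]

def bounds(model):
    # 2.5% and 97.5% quantiles of the simulated skewness distribution.
    s = sorted(skew(model()) for _ in range(reps))
    return s[int(0.025 * reps)], s[int(0.975 * reps) - 1]

lo0, hi0 = bounds(normal)   # H0 interval
loa, hia = bounds(gumbel)   # Ha interval

# Fresh samples: how often does each model's own interval reject it?
alpha = sum(not (lo0 <= skew(normal()) <= hi0) for _ in range(2000)) / 2000
beta  = sum(not (loa <= skew(gumbel()) <= hia) for _ in range(2000)) / 2000
print(alpha, beta)  # both should come out close to 0.05
```

Both rejection rates landing near 5% is exactly the "exactness" claim: the error rates are set by the simulation itself, not by an asymptotic formula.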
Luis A. Afonso

