The Math Forum


Math Forum » Discussions » sci.math.* » sci.stat.math

Notice: We are no longer accepting new posts, but the forums will continue to be readable.

Topic: Where the lines go cross each other . . .
Replies: 5   Last Post: Apr 15, 2014 10:43 AM

Luis A. Afonso

Posts: 4,758
From: LIsbon (Portugal)
Registered: 2/16/05
Re: Where the lines go cross each other . . .
Posted: Apr 15, 2014 6:53 AM

A simulation method to test hypotheses


It is widely agreed that Null Hypothesis Significance Tests (NHST) are a reliable, useful, and massively used tool, providing worthwhile information on what happens in Nature through observation/experiment and on the assumptions about what is going on.
In Classical Statistics, however, NHST have some features that are somewhat critical: they are unable to state that a hypothesis is true or untrue; they are strongly biased by the 1/20 convention, starting from the assumption that the Null Hypothesis H0 is true and therefore protecting it against rejection in favor of the alternative hypothesis, Ha; and, finally, they provide the probability that the data conflict with the Null Hypothesis, not the probability of the Null given the data, which is a different (Bayesian) approach.

The algorithm

Suppose we start by obtaining the 2.5% and 97.5% bounds of the confidence intervals under H0 and Ha, respectively (w0, w´0) and (wa, w´a).
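The post does not give code, but the bounds-by-simulation step can be sketched as follows. The helper name `ci_bounds`, the choice of the sample mean as test statistic, and the two normal models are illustrative assumptions, not part of the original:

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def ci_bounds(simulate_stat, n_rep=20_000):
    # Simulate the test statistic n_rep times under one model and
    # return the 2.5% / 97.5% bounds of its sampling distribution.
    stats = np.array([simulate_stat(rng) for _ in range(n_rep)])
    return np.percentile(stats, [2.5, 97.5])

# Hypothetical example: the statistic is the mean of n = 30 draws.
n = 30
w0, w0p = ci_bounds(lambda r: r.normal(0.0, 1.0, n).mean())  # under H0: mu = 0
wa, wap = ci_bounds(lambda r: r.normal(0.8, 1.0, n).mean())  # under Ha: mu = 0.8
```

Because the bounds come from repeated simulation under each model rather than from a distributional approximation, this is exactly the "exact test" property claimed later in the post.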

By the figure below, where a, a´ label the bounds w0, w´0 of the H0 interval and b, b´ label the bounds wa, w´a of the Ha interval.
The symbol pair (u, v) is stated in order: Null, Alternative, where 0 notes fail to reject the hypothesis, 1 reject it.

________w0______________w´0__________________________
________a_______________a´___________________________
____________________wa______________w´a______________
____________________b_______________b´_______________
____|_____(0,1)_____|_____(0,0)_____|_____(1,0)_____|
____a_______________b_______________a´______________b´

If, adjusting the sample size, we make the bounds b and a´ coincide, we can discriminate as follows: if the test statistic is less than a´ (and larger than a) we state (0, 1), fail to reject the Null and reject the alternative, with alpha <= 5%; if the test statistic is larger than a´ (but not larger than b´), it is (1, 0), reject the Null and fail to reject the alternative, with Type II error <= 5%, i.e. Power at least 95%. Of course, if the test statistic lies to the left of w0 (or to the right of w´a), both hypotheses are rejected.
What is 'new' here?
__a) The H0 tolerance is eliminated: both hypotheses have the same chance of being rejected, instead of reserving the 5% significance level for H0 and letting the alternative be chosen only by exclusion (the Neyman-Pearson method).
__b) Because the sample under test is obtained by simulation, the parameter under test being generated by repetition under the model, the test is exact: the p-value is equal to the one obtained by simulation, with no approximate model.

No more than an idea . . . The thread shows examples of comparisons between the Uniform, Standard Normal, and Gumbel(0, 1) distributions through skewness, using one-tail tests rather than the two-tail tests proposed here.
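The thread's own examples are not reproduced here, but the kind of skewness comparison it describes can be sketched by simulating the sampling distribution of the sample skewness under two of the named models. The function name, sample size, and repetition count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(seed=2)

def skewness(x):
    # Sample skewness g1: third central moment over s**3.
    x = np.asarray(x)
    d = x - x.mean()
    return (d ** 3).mean() / x.std() ** 3

n, n_rep = 50, 10_000
sk_unif = np.array([skewness(rng.uniform(0, 1, n)) for _ in range(n_rep)])
sk_gumbel = np.array([skewness(rng.gumbel(0, 1, n)) for _ in range(n_rep)])

# 2.5% / 97.5% bounds of the skewness statistic under each model
lo_u, hi_u = np.percentile(sk_unif, [2.5, 97.5])
lo_g, hi_g = np.percentile(sk_gumbel, [2.5, 97.5])
```

The Uniform skewness interval straddles zero while the Gumbel interval sits well to its right, which is what makes skewness a workable discriminating statistic for the two-interval scheme described above.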

Luis A. Afonso



© The Math Forum at NCTM 1994-2018. All Rights Reserved.