
Math Forum » Discussions » sci.math.* » sci.stat.math.independent

Topic: Statistically significant
Luis A. Afonso

Posts: 4,613
From: Lisbon (Portugal)
Registered: 2/16/05
Statistically significant
Posted: Sep 21, 2013 2:49 PM

What a mess! After all, is it statistically significant (!)

The sample size and NHST

Those who insist on thinking that probabilistic (random) phenomena are compatible with Aristotelian logic easily reach incongruent conclusions; the insufficiency is then ascribed to the method, not to the user. Hence a lot of people claim that NHSTs are wrong/useless . . .
Why on earth, for example, does one fail to reject H0 with a sample of size 10, yet reject it for n=100 at the same significance level alpha?
First of all, it should be noted that the null hypothesis H0: k=0 does not mean that we are trying to find out whether the parameter k is null (surprise!) . . . Rather, we are asking whether, given the data under analysis, the estimate of k is so close to zero that the null should not be rejected.
The second thing to keep in mind is that a sample, even one randomly drawn from a population, is not a miniature of it. Each random sample has its own *personality*, and samples provide anything from rather coarse to excellent estimates of the parameter under inspection.
Parameter Estimation
As a matter of fact, the estimate is a random variable k^, a function of the data, whose difference from k is unknown. For Consistent Estimators (CE), the larger the sample size, the smaller the quantities E(|k^ - k|), or alternatively E((k^ - k)^2), become; these bounds follow directly from the Markov and Chebyshev inequalities. Once found, a CE delivers the parameter's exact value in the limit (infinite size) of the sequence of estimates. The same parameter can be estimated through different estimators, and in practice the choice is not indifferent: in economic terms, the preferable one is the estimator that needs less data to attain a preset absolute difference from the exact value.
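A minimal sketch of consistency (my own illustration, not from the post): the sample mean is a consistent estimator of the population mean, and its mean squared error E((k^ - k)^2) shrinks like 1/n as the sample size grows.

```python
# Sketch: consistency of the sample mean as an estimator of the mean k.
# The true parameter k = 0.3 and the normal population are assumptions
# chosen purely for this demo.
import random

random.seed(1)
k = 0.3          # true parameter (population mean)
reps = 2000      # Monte Carlo repetitions per sample size

for n in (10, 100, 1000):
    mse = 0.0
    for _ in range(reps):
        sample = [random.gauss(k, 1.0) for _ in range(n)]
        k_hat = sum(sample) / n          # the estimate k^
        mse += (k_hat - k) ** 2
    mse /= reps
    # For a unit-variance population, theory gives MSE = 1/n
    print(f"n={n:5d}  MSE = {mse:.5f}  (theory: 1/n = {1/n:.5f})")
```

The printed MSE falls roughly tenfold each time n grows tenfold, which is exactly the behaviour the Chebyshev bound guarantees for a consistent estimator.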

Is it acceptable that, when more data become available, a result changes from not significant to significant? Yes, absolutely! It could not be otherwise . . .
Explanation: When H0: k=0 is false and a Consistent Estimator is used, a difference, however small, between the real value of the parameter and the null value becomes more and more likely to be detected as the amount of data grows: the accumulating information makes it less and less tenable to take the parameter's value as the preset null one (which, of course, need not be 0, as we suppose for convenience, but can be any real constant chosen a priori).
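The explanation above can be simulated directly (an assumed setup, not the author's): with a true parameter slightly off zero, the rejection rate of a simple z-test on the sample mean climbs toward 1 as n grows, while staying near alpha for small samples.

```python
# Sketch: power of a two-sided z-test grows with sample size when
# H0: k=0 is false by a small amount.  The effect size 0.1 and known
# sigma = 1 are assumptions for the demo.
import math
import random

random.seed(2)
k_true = 0.1     # true parameter: close to, but not exactly, 0
z_crit = 1.96    # two-sided critical value for alpha = 0.05
reps = 2000

for n in (10, 100, 1000):
    rejections = 0
    for _ in range(reps):
        sample = [random.gauss(k_true, 1.0) for _ in range(n)]
        z = (sum(sample) / n) * math.sqrt(n)   # z = k^ * sqrt(n) / sigma
        if abs(z) > z_crit:
            rejections += 1
    print(f"n={n:5d}  rejection rate = {rejections / reps:.3f}")
```

With these assumed numbers the rejection rate goes from near alpha at n=10 to well above 0.8 at n=1000: the same small true difference is "not significant" with little data and "significant" with a lot, just as the post argues.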


To stress that we can reach *illogical* results from the classical (Aristotelian) point of view, consider the following example. We repeat the NHST under the same conditions on two independent data sets, finding p1=0.08 and p2=0.07, so it seems we found a not-significant value (alpha=0.05) followed by a confirming not-significant one. BUT
_________Fisher H = -2*ln(0.08) - 2*ln(0.07) = 10.37
_________prob (H > 10.37 | chi-squared, 4 df) = 0.0346
which is significant, so we must reject the null at the 5% level. (Fisher's combined statistic for k independent p-values follows a chi-squared law with 2k degrees of freedom, here 4.)
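The computation can be checked with a few lines (a sketch using only the standard library; the closed-form survival function used below is exact for even degrees of freedom):

```python
# Fisher's method for combining independent p-values:
# H = -2 * sum(ln p_i) ~ chi-squared with 2k df (k = number of tests).
import math

p_values = [0.08, 0.07]
H = -2 * sum(math.log(p) for p in p_values)

# Survival function of chi-squared with 4 df (exact for even df):
# P(X > x) = exp(-x/2) * (1 + x/2)
p_combined = math.exp(-H / 2) * (1 + H / 2)

print(f"H = {H:.2f}")                    # ≈ 10.37
print(f"combined p = {p_combined:.4f}")  # ≈ 0.0346 < 0.05
```

Two individually non-significant p-values thus combine into a significant one, which is the *illogical*-looking result the post is driving at.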

Luis A. Afonso (Sep. 20, 2013)



© Drexel University 1994-2014. All Rights Reserved.
The Math Forum is a research and educational enterprise of the Drexel University School of Education.