Topic: Trying to understand Bayes and Hypothesis
Replies: 11   Last Post: Feb 22, 2013 3:09 AM

divergent.tseries@gmail.com

Posts: 40
Registered: 7/29/12
Re: Trying to understand Bayes and Hypothesis
Posted: Feb 20, 2013 3:14 PM

Wow, I hadn't expected my post to generate such a firestorm. I guess neutrality IS a big deal, LOL.

Okay, so let me carefully address "significance." Overall, I agree with you. Indeed, if memory serves, Fisher preferred the reporting of p-values to the rejection/acceptance-region idea. But that isn't really the issue.

It isn't always about an action. In business or engineering applications there is usually an action to be taken, and significance testing neither includes cost functions nor gives you automatic direction. Rejecting a hypothesized value for the precision of a machine may be informative, but failing to reject isn't. That is clearly a problem. In science, on the other hand, significance testing can still be valuable.

Fisher, again as I understand it, started down the Frequentist path in order to place some regularity on problems where no prior data existed, and I do believe he stated that the Bayesian method should be used where prior information did exist.

Science is a combination of deductive and inductive reasoning. Inductive reasoning is incomplete by its nature, even if in practice it is powerful. Frequentism can be powerful if it is properly used as an extension of modus tollens. Indeed, many errors within science have come from using inductive rather than deductive reasoning. The fact that the point null hypothesis is always false isn't a problem, for two reasons that I will show below.

Let us imagine that you believe someone is a con man and, for some reason, you are willing to put your money on the line to prove it. You gamble in either a coin-tossing game or a game similar to three-card monte. It doesn't really matter which.

You decide to be a Frequentist and use the null hypothesis that P(heads) = 1/2, or, if you are playing three-card monte, that each pile has a 1-in-3 chance of being chosen.
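Just to make that setup concrete, here is a minimal sketch of those two nulls as tests. This is my own illustration, assuming Python with scipy, and the counts are invented:

# Hypothetical counts, purely for illustration.
from scipy import stats

# Coin-tossing game: H0 is P(heads) = 1/2.
heads, flips = 5_300, 10_000
coin = stats.binomtest(heads, flips, p=0.5)   # exact binomial test
print(f"coin p-value: {coin.pvalue:.4f}")

# Three-card-monte-style game: H0 is that each pile is chosen 1/3 of the time.
observed = [290, 410, 300]                    # picks of each pile
expected = [sum(observed) / 3] * 3
chi2, p = stats.chisquare(observed, expected)
print(f"monte p-value: {p:.4f}")

Nothing fancy: a small p-value is the "things that should rarely happen if the null were true" signal I talk about below.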

If you flip a US quarter enough times, you can show that it is not a fair coin; some grad students at Harvard did this for some reason, I think by having a robot flip the coin 50 or 100,000 times. Likewise, psychological research on the ability of humans to replicate random behavior shows that humans will not choose all three piles evenly.

If you flip the coin enough, you are guaranteed to get evidence that the other party is not using a fair coin, even if the coin is "fair" in the sense that they are not deliberately cheating you. This is true whether you are a Frequentist or a Bayesian.
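To put a number on "enough," here is a rough power sketch of my own (the 0.501 bias is invented): any fixed departure from the point null, however tiny, gets detected with probability approaching 1 as the number of flips grows.

# Sketch: power of the two-sided test of H0: p = 0.5 at alpha = 0.05
# when the true probability of heads is 0.501 (a negligible bias).
from math import sqrt
from statistics import NormalDist

alpha, p0, p_true = 0.05, 0.5, 0.501
z = NormalDist().inv_cdf(1 - alpha / 2)

for n in (10_000, 1_000_000, 100_000_000):
    se0 = sqrt(p0 * (1 - p0) / n)          # std error under H0
    se1 = sqrt(p_true * (1 - p_true) / n)  # std error under the truth
    # P(|p_hat - p0| > z * se0) when p_hat ~ Normal(p_true, se1)
    upper = 1 - NormalDist(p_true, se1).cdf(p0 + z * se0)
    lower = NormalDist(p_true, se1).cdf(p0 - z * se0)
    print(f"n = {n:>11,}: power ~= {upper + lower:.3f}")

At ten thousand flips you almost never reject; at a hundred million you almost always do, even though the "bias" is meaningless.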

Except that in the real world we rarely flip the coin enough. Statistics is an approximation. If you really have enough data, you don't really need statistics. If you stand Michael Jordan next to a five-year-old and are asked who is taller, getting out a measuring tape would only be required by journal editors.

The Frequentist hypothesis may be false, but that is okay, as the goal is to show that it is really, really false and not just really false. A Frequentist is simply saying, "I believe the world works thus, and if that is true then certain things should rarely happen." They could happen, but it would be surprising, and you should probably consider alternative models if they do.

That said, you would still be better off with the Bayesian model, as there is information in the problem that the Frequentist test ignores. For a con man to get away with such a thing, the coin needs to look fair, so an ignorance prior is inappropriate.
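For what it's worth, here is a sketch of what that informative prior might look like, again my own illustration with made-up numbers: a Beta prior piled up around 1/2, updated with the same hypothetical counts as above.

# Sketch of the Bayesian version: instead of a flat ("ignorance") prior on
# P(heads), use a Beta prior tightly concentrated around 1/2, since a coin a
# con man could actually use has to look fair. Numbers are invented.
from scipy import stats

a0, b0 = 500, 500                         # Beta(500, 500): prior mass near 0.5
heads, flips = 5_300, 10_000              # same hypothetical data as before
a1, b1 = a0 + heads, b0 + flips - heads   # conjugate Beta-Binomial update

posterior = stats.beta(a1, b1)
lo, hi = posterior.ppf([0.025, 0.975])
print(f"posterior mean P(heads) = {posterior.mean():.4f}")
print(f"95% credible interval   = ({lo:.4f}, {hi:.4f})")
print(f"P(heads > 0.5 | data)   = {1 - posterior.cdf(0.5):.4f}")

The tight prior does exactly what the story demands: it takes a lot of data to drag the posterior away from "this coin looks fair."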

Frequentism is helpful when two criteria are met. One, you don't have enough data to decide without statistical tools. Two, you can define clearly what "I am wrong" means. The broad failure of statistical education is that no one teaches that.

No one sits down and says, "this is a Bayesian problem" or "this is a Frequentist problem." Why we ram t-tests down the throats of sophomores is beyond me. Why no one has written a book on reasoning makes no sense at all.

One more note: I try to avoid self-consistent behavior. I realized when I was 26 years old, and Santa didn't bring me a present that year, that the universe might not be self-consistent. I do admit it wasn't my best year on the "naughty/nice" spectrum, but you don't expect the rug to be pulled out from under you. Since then his track record has been spotty. Figuring that the universe can fail at being self-consistent, I don't mind a little inconsistency either.



