Bayesians and Frequentists

Date: 08/21/2001 at 21:49:56
From: Ellen
Subject: Bayesian Statistics?

Dr. Math - I am going to be taking a statistics course this semester and noticed a chapter called "Bayesian Statistics." What is the difference between this and "regular" statistics?

Thanks,
Ellen


Date: 08/22/2001 at 15:27:17
From: Doctor Jordi
Subject: Re: Bayesian Statistics?

Hi, Ellen - thanks for writing to Ask Dr. Math.

The difference between Bayesian statistics and regular (Frequentist) statistics is essentially a different interpretation of what probability signifies, and thus a different way to make an inference about a population given that we have a sample of that population.

When I tell you, "The probability that this coin lands heads is 1/2," what do you make of it? There are a couple of ways to think about it.

A frequentist (and I imagine that you are more familiar with this interpretation) reasons as follows: if the probability of landing heads is 1/2, this means that if we were to repeat the experiment of tossing the coin very many times, we would expect to see approximately the same number of heads as tails. That is, the ratio of heads to tails will approach 1:1 as we toss the coin more and more times.

A Bayesian, however, would interpret that statement in a different way: for me, probability is a very personal opinion. What a probability of 1/2 means to me may differ from what it means to someone else. However, if pressed to place a bet on the outcome of tossing a single coin, I would just as soon guess heads as tails. More generally, if I were to bet on the roll of a die and was told that the probability of any face coming up is 1/6, and the rewards for guessing correctly on any outcome are equal, then it would make no difference to me which face of the die I bet on. That is why the Bayesian point of view is sometimes called the Subjectivist point of view.
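As a quick illustration of the frequentist reading (my own sketch, not part of the original answer; the function name is made up), a short simulation shows the fraction of heads settling near 1/2 as the number of tosses grows:

```python
import random

random.seed(42)  # fixed seed so the run is repeatable

def heads_fraction(num_tosses):
    """Toss a fair coin num_tosses times; return the observed fraction of heads."""
    heads = sum(1 for _ in range(num_tosses) if random.random() < 0.5)
    return heads / num_tosses

# The long-run relative frequency is what the frequentist calls "probability 1/2".
for n in (10, 1_000, 100_000):
    print(f"{n:>7} tosses: fraction of heads = {heads_fraction(n):.4f}")
```

For small n the fraction can wander well away from 0.5; only the long-run behavior pins it down, which is exactly the frequentist's point.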
In other words, Bayesians consider probability statements to be a measure of one's (personal) degree of belief in a certain hypothesis in the face of uncertainty - a subjective measure.

The two points of view differ widely and affect the way in which we conduct statistical inference. Allow me to elaborate.

In statistics, we make an inference - a guess about a population based on a sample we draw from it. We may, for example, want to know what the speed of light in vacuum "really" is.

[As reader Steve Dodge points out: "Since 1983, the speed of light has been a _defined quantity_, set at the integer value of 299 792 458 m/s. The meter is then defined as the distance light travels in vacuum after 1/299 792 458 s, and the second is defined in terms of an actual measurement of an atomic system, in an atomic clock." So let's assume that the following imaginary discussion takes place before 1983.]

We have a problem, however: our experiment is imperfect, and random errors will always crop up in our measurements, no matter how carefully we make them. So say we repeat our experiment five times and observe the following measurements, in meters per second:

   299,792,459.2
   299,792,460.0
   299,792,456.3
   299,792,458.1
   299,792,459.5

In this example, our population is the abstract infinity of all possible measurements we could make. Our sample is the five measurements we have made. Now we wish to estimate a parameter of this population, namely the population mean, or the "true" speed of light in a vacuum. How do we deal with the random errors?

For a Frequentist, there exists a fixed, true, but unknown speed of light in vacuum. The Frequentist would assume that the random errors follow a certain probability distribution (probably the normal distribution, also known as the Gaussian, which looks like a bell curve) and would proceed to take the arithmetic average of the above five measurements.
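The frequentist computation can be sketched in a few lines of Python (again my own illustration; the t-value is the standard 97.5th-percentile value for 4 degrees of freedom, quoted rather than computed):

```python
import math
import statistics

# The five (hypothetical) measurements of the speed of light, in m/s.
measurements = [299_792_459.2, 299_792_460.0, 299_792_456.3,
                299_792_458.1, 299_792_459.5]

n = len(measurements)
mean = statistics.mean(measurements)                 # point estimate of the "true" speed
se = statistics.stdev(measurements) / math.sqrt(n)   # standard error of the mean

# A rough 95% confidence interval; t with 4 degrees of freedom is about 2.776.
t = 2.776
low, high = mean - t * se, mean + t * se
print(f"estimate: {mean:.2f} m/s")
print(f"95% CI:   ({low:.2f}, {high:.2f}) m/s")
```

Note the frequentist fine print: the confidence interval is a statement about the procedure (about 95% of intervals built this way would cover the true value), not a probability statement about this particular interval.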
The resulting statistic (a statistic is a function of your sample) would be used as an estimator for the population mean. The estimator itself is a random variable, so we can say, as Frequentists:

   If we were to repeat this sequence of 5 measurements many times, approximately this many realizations of my estimator would be this close to the true speed of light. However, on this particular occasion, where I have already calculated my statistic, I have no clue how close I actually am to the true value; but I feel comfortable that I am doing okay because of certain properties that my estimator has on repeated use.

For a Bayesian, the above paragraph is nonsense. The Bayesian DOES have a clue how close this particular realization of her estimator is to the speed of light, because, unlike the Frequentist, she can make a probability statement about this realization. The random errors have no probability distribution: they are fixed realizations; they are reality. Instead, a Bayesian claims that the speed of light is a random variable with its own probability distribution. For a Bayesian there is no "true" speed of light; there is only a certain probability distribution associated with it.

In Bayesian statistical inference, we first make a guess about what the probability distribution of the parameter in question is. This is called a prior distribution. Then we observe our sample. Based on our observations, we use a theorem called Bayes' theorem (hence the name "Bayesian") to modify our guess about what the distribution of the parameter is. This modified guess is called a posterior distribution.

Summing it up, Bayesians and Frequentists give opposite answers to the question: "Does there exist a true, fixed, nonrandom population parameter, even if we cannot know its value because all we can see are the realizations of SOME random variable?" Frequentists say yes; Bayesians say no.

Does this answer your question?
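The prior-to-posterior update can also be sketched concretely. The sketch below (my own, not part of the original answer) uses the textbook normal-normal conjugate case: the prior numbers and the measurement noise level are invented assumptions, and the noise standard deviation is treated as known so that Bayes' theorem reduces to a closed-form formula in which precisions (inverse variances) add:

```python
import math

measurements = [299_792_459.2, 299_792_460.0, 299_792_456.3,
                299_792_458.1, 299_792_459.5]

# Prior: a hypothetical belief that the speed is near 299,792,458 m/s,
# held loosely (standard deviation 10 m/s). Both numbers are assumptions.
prior_mean, prior_sd = 299_792_458.0, 10.0

# Assumed known measurement noise: each reading has standard deviation 2 m/s.
noise_sd = 2.0

n = len(measurements)
sample_mean = sum(measurements) / n

# Normal-normal conjugate update: posterior precision = prior + data precision,
# and the posterior mean is a precision-weighted average of prior and data.
prior_prec = 1 / prior_sd**2
data_prec = n / noise_sd**2
post_prec = prior_prec + data_prec
post_mean = (prior_prec * prior_mean + data_prec * sample_mean) / post_prec
post_sd = math.sqrt(1 / post_prec)

print(f"posterior: mean {post_mean:.2f} m/s, sd {post_sd:.2f} m/s")
```

The output is a full probability distribution for the speed of light, so the Bayesian can say things like "with probability 0.95, the speed lies in this interval" - exactly the kind of statement the frequentist's confidence interval does not license.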
Please write back if you have other questions or if you feel that I did not explain myself well enough.

- Doctor Jordi, The Math Forum
  http://mathforum.org/dr.math/
© 1994-2013 The Math Forum