Date: Feb 3, 2007 5:45 PM
Author: Scott
Subject: Laplace's rule of succession

In Bayesian statistics, Laplace's rule of succession attempts to solve
the problem of how we can predict that the sun will rise tomorrow,
given its past frequency of rising.


1. Let p be the (unknown) probability of success on each trial.
2. Let n be the total number of trials.
3. Let s be the number of *successes* among these trials, so that n -
s is the number of failures.

The rule of succession states that the probability of the next success
is given by the *expected value of p under the normalized likelihood
function*. The likelihood function is

p^s * (1 - p)^(n - s).

Dividing by the normalizing integral S_{0 to 1}(p^s * (1 - p)^(n - s)) dp
turns the likelihood into a probability density over p. Taking the
expected value of p under that density,

S_{0 to 1}(p^(s + 1) * (1 - p)^(n - s)) dp / S_{0 to 1}(p^s * (1 - p)^(n - s)) dp,

and evaluating both Beta integrals, one obtains

(s + 1)/(n + 2)

for the probability of the next success. Thus, if all we know is that
the sun has risen on each of the last 2000 days (so n = s = 2000), the
probability of its rising again is 2001/2002, or about 0.9995.
Now, I have a question. What's so special about this likelihood
function? It seems to be formulated completely ad hoc. If the sample
space were all possible successions, the probability of the next
success would simply be 1/2. So what gives?

The expression p^s * (1 - p)^(n - s) is the probability of a particular
sequence of s successes and n - s failures, with a *fixed probability
p* of success on each trial, independent of the trial number. (Counting
all orderings of that sequence would introduce a binomial coefficient,
but it cancels under the normalization.) But how can we impose this
property on a sequence? How do we know that there is a fixed
probability of success on each trial?

Is Laplace's rule even accepted nowadays?

I would like to understand more of the philosophical theory behind the
choice and justification of the likelihood function. Thank you for
your help.