Topic: Reducing bias of a Bayesian point estimator
Replies: 13   Last Post: Oct 19, 2012 3:02 AM

paulvonhippel at yahoo

Posts: 72
Registered: 7/13/05
Re: Reducing bias of a Bayesian point estimator
Posted: Oct 14, 2012 9:49 PM

I am in fact interested in estimating gamma. I am using a Bayesian approach to account for the fact that gamma cannot exceed 1, but I would like the resulting estimate to have minimal bias in a frequentist sense.

The estimation of gamma is embedded in a more complicated problem, but I believe progress can be made by addressing the estimation of gamma alone. Many thanks for any suggestions regarding the problem as I originally posed it.
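As a rough illustration of what I mean by frequentist bias, here is a small Monte Carlo sketch (Python/NumPy) for the setup quoted below: it draws g = gamma*U/(N-1), forms the truncated posterior of gamma under a uniform(0,1) prior on a grid, and averages the posterior mean and median over repeated draws. The function names, the grid approximation, and the illustrative values N = 10 and true gamma = 0.9 are my own choices, not anything fixed by the problem.

import numpy as np

def posterior_mean_median(g, N, grid=np.linspace(1e-6, 1 - 1e-6, 20001)):
    # Uniform(0,1) prior, so the posterior of gamma on (0,1) is proportional
    # to the likelihood: gamma^(-(N-1)/2) * exp(-g*(N-1)/(2*gamma)),
    # i.e. a scaled inverse chi-square kernel truncated on the right at 1.
    k = N - 1
    log_post = -(k / 2.0) * np.log(grid) - g * k / (2.0 * grid)
    w = np.exp(log_post - log_post.max())
    w /= w.sum()                                   # normalize on the grid
    mean = np.sum(grid * w)
    median = grid[np.searchsorted(np.cumsum(w), 0.5)]
    return mean, median

def frequentist_bias(gamma_true=0.9, N=10, reps=5000, seed=1):
    # Average (estimate - true gamma) over repeated draws of g.
    rng = np.random.default_rng(seed)
    est = np.array([posterior_mean_median(gamma_true * rng.chisquare(N - 1) / (N - 1), N)
                    for _ in range(reps)])
    return est.mean(axis=0) - gamma_true           # (bias of mean, bias of median)

print(frequentist_bias())

With a true gamma near 1, both components come out clearly negative, which is exactly the problem described in the quoted message below.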

On Sunday, October 14, 2012 3:58:58 PM UTC-5, David Jones wrote:
> "Paul" wrote in message
> news:f18a81ba-069f-45fc-ad7c-2da6931ee06a@googlegroups.com...
>
> On Saturday, October 13, 2012 5:23:14 PM UTC-5, David Jones wrote:
>
> > "Paul" wrote in message
> >
> > I am interested in ways of reducing the bias of a point estimator when
> > the true parameter is near the boundary of the parameter space.
> >
> > Suppose g = gamma U / (N-1), where U ~ chisq(N-1), N is a known small
> > sample size, and gamma is an unknown parameter. A priori we know that
> > 0 <= gamma < 1. Notice that the upper inequality is strict; that is,
> > gamma cannot have a value of 1.
> >
> > One approach to estimation is to assign gamma a prior distribution that
> > is uniform on (0,1). Then the posterior distribution of gamma is a
> > scaled inverse chi-square, truncated on the right at 1. Now the obvious
> > point estimators are the posterior mean and median. (I can't use the
> > mode because it can take a value of 1.) The trouble with the posterior
> > mean and median is that they have large negative biases if the true
> > value of gamma is actually close to 1.
> >
> > I'd be grateful for ideas on how to reduce this bias. One idea I've been
> > toying with is to use a posterior quantile greater than the median,
> > i.e., quantile p where p > 1/2. Maybe I would use a larger p when I had
> > a larger g. This isn't an idea that I've seen discussed elsewhere. Many
> > thanks for any references on this or other possibilities.
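To make the quantile idea quoted above concrete, here is a small sketch of one way it could be computed (Python/NumPy). The grid approximation of the truncated posterior, the name posterior_quantile, and the values g = 0.8, N = 10, p = 0.7 are only illustrative choices of mine; p = 0.7 just stands in for "some quantile above the median".

import numpy as np

def posterior_quantile(g, N, p, grid=np.linspace(1e-6, 1 - 1e-6, 20001)):
    # Truncated posterior of gamma on (0,1) under a uniform(0,1) prior:
    # kernel gamma^(-(N-1)/2) * exp(-g*(N-1)/(2*gamma)).
    k = N - 1
    log_post = -(k / 2.0) * np.log(grid) - g * k / (2.0 * grid)
    cdf = np.cumsum(np.exp(log_post - log_post.max()))
    return grid[np.searchsorted(cdf / cdf[-1], p)]  # posterior quantile p

# One observed g; p > 1/2 pulls the estimate above the posterior median.
print(posterior_quantile(g=0.8, N=10, p=0.7))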
> > -------------------------------
> >
> > (1) Why do you think "bias" is important?
> >
> > (2) If you want to define a point estimate in a Bayesian context, it
> > would be best to define a realistic loss function for the actual
> > situation and to use this to derive the corresponding "best" point
> > estimate.
> -----------------------------------------
>
> Bias is important. The quantity that I am estimating, gamma, is a variance
> that will be used to calculate confidence intervals. If the estimate of
> gamma is negatively biased, then the coverage of the confidence intervals
> will be too low.
>
> What is an appropriate loss function to use under these circumstances?
> ----------------------------------------------------------------
>
> "Bias" is most often used in the context of an arithmetic mean. It is
> clear from these extra details that you are not actually concerned with
> estimating gamma. It is also somewhat confusing that you are contemplating
> mixing the classical and Bayesian paradigms. The phrase "is a variance
> that will be used to calculate confidence intervals" indicates that there
> are other statistics around, possibly statistically dependent on g, and
> these dependencies would need to be taken into account.
>
> For a purely classical approach, you would need to evaluate the sampling
> distribution of some combination of a sample statistic and the selected
> estimate of gamma. Supposed bias in the estimate of gamma is unimportant
> because any such effects are eliminated by correctly evaluating the
> sampling distribution of the combined statistic, and in using this to
> derive the confidence interval. Clearly you wouldn't be expecting to use
> a Student's t distribution here. Of course, using different combinations
> of sample statistics and different selected estimates of gamma would
> typically lead to different confidence intervals with different
> properties. If evaluation of the distribution can't be done analytically,
> then simulation or bootstrapping may be useful routes to a practical
> procedure.
>
> For a sensible Bayesian approach you would want to evaluate a credible
> interval, not a confidence interval, for your other parameter of interest.
> This would involve integrating the joint posterior distribution of the
> parameters with respect to gamma. Of course, the answers here would depend
> a lot on the joint prior distribution of all the parameters, and you would
> need to have good reasons for any assumptions. You didn't seem
> particularly convinced of the "uniform on (0,1)" distribution for just one
> of the parameters, and you would also need to consider dependence in the
> joint prior distribution.
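Following the suggestion that simulation may be a practical route, here is a sketch that sweeps a few true values of gamma and a few fixed quantiles p to see how the frequentist bias of the quantile estimator behaves near the boundary. The grids of gamma and p values, N = 10, and the function names are illustrative choices of mine, not part of the problem as posed.

import numpy as np

def quantile_estimate(g, N, p, grid=np.linspace(1e-6, 1 - 1e-6, 4001)):
    # Quantile p of the truncated posterior of gamma under a uniform(0,1) prior.
    k = N - 1
    log_post = -(k / 2.0) * np.log(grid) - g * k / (2.0 * grid)
    cdf = np.cumsum(np.exp(log_post - log_post.max()))
    return grid[np.searchsorted(cdf / cdf[-1], p)]

def bias_table(N=10, reps=2000, seed=2):
    rng = np.random.default_rng(seed)
    for gamma_true in (0.5, 0.7, 0.9):
        gs = gamma_true * rng.chisquare(N - 1, size=reps) / (N - 1)
        for p in (0.5, 0.6, 0.7, 0.8):
            est = np.array([quantile_estimate(g, N, p) for g in gs])
            print("gamma=%.1f  p=%.1f  bias=%+.3f" % (gamma_true, p, est.mean() - gamma_true))

bias_table()

A table like this would show whether any single p keeps the bias small across the range of gamma, or whether p really does need to grow with g, as suggested in my original message.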





