Views expressed in these public forums are not endorsed by NCTM or The Math Forum.


Math Forum » Discussions » sci.math.* » sci.stat.math
Topic: Reducing bias of a Bayesian point estimator
Replies: 13
Last Post: Oct 19, 2012 3:02 AM

Re: Reducing bias of a Bayesian point estimator
Posted: Oct 14, 2012 9:49 PM


I am in fact interested in estimating gamma. I am using a Bayesian approach to account for the fact that gamma cannot exceed 1, but I would like the resulting estimate to have minimal bias in a frequentist sense.
The estimation of gamma is embedded in a more complicated problem, but I believe progress can be made by addressing the estimation of gamma alone. Many thanks for any suggestions regarding the problem as I originally posed it.
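For anyone wanting to reproduce the bias, here is a minimal Python sketch of the setup from my original post (quoted below): g = gamma U / (N-1) with U ~ chisq(N-1) and a uniform(0,1) prior on gamma, so the posterior is a scaled inverse chi-square truncated at 1. The value N = 10 is an assumption purely for illustration; the thread only says N is small. The posterior mean and median are computed by grid integration, and their frequentist bias is estimated by Monte Carlo at a true gamma near the boundary:

```python
import numpy as np
from scipy import stats

N = 10          # hypothetical sample size; the thread only says N is small
df = N - 1

def posterior_stats(g, n_grid=4000):
    """Posterior mean and median of gamma under a uniform(0,1) prior,
    given one observation g = gamma * U / (N-1) with U ~ chisq(N-1)."""
    grid = np.linspace(1e-4, 1.0 - 1e-9, n_grid)
    dx = grid[1] - grid[0]
    # Likelihood in gamma (constants dropped): (N-1)*g/gamma ~ chisq(N-1),
    # with Jacobian factor 1/gamma from the change of variable.
    like = stats.chi2.pdf(df * g / grid, df) / grid
    post = like / (like.sum() * dx)          # normalized density on (0,1)
    mean = (grid * post).sum() * dx
    cdf = np.cumsum(post) * dx
    median = grid[np.searchsorted(cdf, 0.5)]
    return mean, median

# Monte Carlo check of the frequentist bias at a true gamma near the boundary
rng = np.random.default_rng(0)
true_gamma = 0.95
gs = true_gamma * rng.chisquare(df, size=1000) / df
means, medians = np.array([posterior_stats(g) for g in gs]).T
print("bias of posterior mean:   %+.3f" % (means.mean() - true_gamma))
print("bias of posterior median: %+.3f" % (medians.mean() - true_gamma))
```

Because both estimators are confined to (0,1) by the truncation while g itself often exceeds 1 when gamma is near the boundary, both biases come out negative in repeated sampling, which is exactly the problem described above.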
On Sunday, October 14, 2012 3:58:58 PM UTC-5, David Jones wrote:
> "Paul" wrote in message
> news:f18a81ba069f45fcad7c2da6931ee06a@googlegroups.com...
>
>> On Saturday, October 13, 2012 5:23:14 PM UTC-5, David Jones wrote:
>>> "Paul" wrote in message
>>>
>>>> I am interested in ways of reducing the bias of a point estimator
>>>> when the true parameter is near the boundary of the parameter
>>>> space.
>>>>
>>>> Suppose g = gamma U / (N-1), where U ~ chisq(N-1), N is a known
>>>> small sample size, and gamma is an unknown parameter. A priori we
>>>> know that 0 <= gamma < 1. Notice that the upper inequality is
>>>> strict; that is, gamma cannot have a value of 1.
>>>>
>>>> One approach to estimation is to assign gamma a prior distribution
>>>> that is uniform on (0,1). Then the posterior distribution of gamma
>>>> is a scaled inverse chi-square, truncated on the right at 1. Now
>>>> the obvious point estimators are the posterior mean and median. (I
>>>> can't use the mode because it can take a value of 1.) The trouble
>>>> with the posterior mean and median is that they have large
>>>> negative biases if the true value of gamma is actually close to 1.
>>>>
>>>> I'd be grateful for ideas on how to reduce this bias. One idea
>>>> I've been toying with is to use a posterior quantile greater than
>>>> the median -- i.e., quantile p where p > 1/2. Maybe I would use a
>>>> larger p when I had a larger g. This isn't an idea that I've seen
>>>> discussed elsewhere. Many thanks for any references on this or
>>>> other possibilities.
>>>
>>> (1) Why do you think "bias" is important?
>>>
>>> (2) If you want to define a point estimate in a Bayesian context,
>>> it would be best to define a realistic loss function for the actual
>>> situation and to use this to derive the corresponding "best" point
>>> estimate.
>>
>> Bias is important. The quantity that I am estimating, gamma, is a
>> variance that will be used to calculate confidence intervals. If the
>> estimate of gamma is negatively biased, then the coverage of the
>> confidence intervals will be too low.
>>
>> What is an appropriate loss function to use under these
>> circumstances?
>
> "Bias" is most often used in the context of an arithmetic mean. It
> is clear from these extra details that you are not actually concerned
> with estimating gamma. It is also somewhat confusing that you are
> contemplating mixing the classical and Bayesian paradigms. The phrase
> "is a variance that will be used to calculate confidence intervals"
> indicates that there are other statistics around, possibly
> statistically dependent on g, and these dependencies would need to be
> taken into account.
>
> For a purely classical approach, you would need to evaluate the
> sampling distribution of some combination of a sample statistic and
> the selected estimate of gamma. Supposed bias in the estimate of
> gamma is unimportant because any such effects are eliminated by
> correctly evaluating the sampling distribution of the combined
> statistic, and in using this to derive the confidence interval.
> Clearly you wouldn't be expecting to use a Student's t distribution
> here. Of course, using different combinations of sample statistics
> and different selected estimates of gamma would typically lead to
> different confidence intervals with different properties. If
> evaluation of the distribution can't be done analytically, then
> simulation or bootstrapping may be useful routes to a practical
> procedure.
>
> For a sensible Bayesian approach you would want to evaluate a
> credible interval, not a confidence interval, for your other
> parameter of interest. This would involve integrating the joint
> posterior distribution of the parameters with respect to gamma. Of
> course, the answers here would depend a lot on the joint prior
> distribution of all the parameters, and you would need to have good
> reasons for any assumptions. You didn't seem particularly convinced
> of the "uniform on (0,1)" distribution for just one of the
> parameters, and you would also need to consider dependence in the
> joint prior distribution.
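The quantile idea from the original post (use posterior quantile p > 1/2 in place of the median) also ties in with the loss-function suggestion: the posterior p-quantile is the Bayes estimate under an asymmetric linear loss that penalizes underestimation p/(1-p) times as heavily as overestimation. A sketch of how the bias shrinks as p grows, under the same assumptions as before (hypothetical N = 10, uniform(0,1) prior):

```python
import numpy as np
from scipy import stats

N = 10          # hypothetical sample size; the thread only says N is small
df = N - 1

def posterior_quantile(g, p, n_grid=4000):
    """p-quantile of the posterior of gamma under a uniform(0,1) prior,
    given g = gamma * U / (N-1) with U ~ chisq(N-1)."""
    grid = np.linspace(1e-4, 1.0 - 1e-9, n_grid)
    # Likelihood in gamma (constants dropped): chi2 pdf with 1/gamma Jacobian
    like = stats.chi2.pdf(df * g / grid, df) / grid
    cdf = np.cumsum(like)
    cdf /= cdf[-1]                     # normalize to a CDF on (0,1)
    return grid[np.searchsorted(cdf, p)]

# Monte Carlo bias of the p-quantile estimator at a true gamma of 0.9
rng = np.random.default_rng(1)
true_gamma = 0.9
gs = true_gamma * rng.chisquare(df, size=1000) / df
for p in (0.5, 0.6, 0.7, 0.8):
    est = np.array([posterior_quantile(g, p) for g in gs])
    print("p = %.1f  bias = %+.3f" % (p, est.mean() - true_gamma))
```

Since the quantile is monotone in p, the bias rises (becomes less negative) as p increases; the p that zeroes the bias depends on the unknown true gamma, which is why some data-dependent choice of p, or a calibration by simulation over a range of gamma values, would be needed in practice.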



