
Re: Reducing bias of a Bayesian point estimator
Posted: Oct 13, 2012 6:45 PM


Bias is important. The quantity that I am estimating, gamma, is a variance that will be used to calculate confidence intervals. If the estimate of gamma is negatively biased, then the coverage of the confidence intervals will be too low.
What is an appropriate loss function to use under these circumstances?
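To make the coverage point concrete, here is a small Monte Carlo sketch (my own illustration, not from the thread): a normal-theory interval for a mean, built with a variance that is biased 30% low, covers the true mean noticeably less often than the nominal 95%. The sample size and the 30% shrinkage are arbitrary stand-ins for a negatively biased estimate of gamma.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_sims, z = 10, 20000, 1.96        # sample size, replications, 95% z-value
x = rng.standard_normal((n_sims, n))  # data with true mean 0, true variance 1
means = x.mean(axis=1)

def coverage(var_used):
    """Fraction of nominal-95% intervals, built with var_used, that cover 0."""
    half = z * np.sqrt(var_used / n)
    return np.mean(np.abs(means) <= half)

print(coverage(1.0))  # ~0.95: the correct variance gives nominal coverage
print(coverage(0.7))  # ~0.90: a 30%-low variance gives undercoverage
```

The effect is just the interval half-width shrinking by sqrt(0.7), so the effective critical value drops from 1.96 to about 1.64.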
On Saturday, October 13, 2012 5:23:14 PM UTC-5, David Jones wrote:
> "Paul" wrote in message
> > I am interested in ways of reducing the bias of a point estimator when the
> > true parameter is near the boundary of the parameter space.
> >
> > Suppose g = gamma * U / (N-1), where U ~ chisq(N-1), N is a known small
> > sample size, and gamma is an unknown parameter. A priori we know that
> > 0 <= gamma < 1. Notice that the upper inequality is strict; that is, gamma
> > cannot have a value of 1.
> >
> > One approach to estimation is to assign gamma a prior distribution that is
> > uniform on (0,1). Then the posterior distribution of gamma is a scaled
> > inverse chi-square, truncated on the right at 1. Now the obvious point
> > estimators are the posterior mean and median. (I can't use the mode because
> > it can take a value of 1.) The trouble with the posterior mean and median
> > is that they have large negative biases if the true value of gamma is
> > actually close to 1.
> >
> > I'd be grateful for ideas on how to reduce this bias. One idea I've been
> > toying with is to use a posterior quantile greater than the median, i.e.,
> > quantile p where p > 1/2. Maybe I would use a larger p when I had a larger
> > g. This isn't an idea that I've seen discussed elsewhere. Many thanks for
> > any references on this or other possibilities.
>
> (1) Why do you think "bias" is important?
>
> (2) If you want to define a point estimate in a Bayesian context, it would
> be best to define a realistic loss function for the actual situation and to
> use this to derive the corresponding "best" point estimate.
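The posterior quantile estimator described in the quoted post can be sketched numerically (my own code, not from the thread). With a uniform(0,1) prior, the posterior of gamma given g is proportional to the likelihood gamma^(-(N-1)/2) * exp(-(N-1)g / (2*gamma)) on (0,1), so a quantile can be read off a normalized cumulative sum on a grid. The values N = 10 and g = 0.8 below are arbitrary illustrations.

```python
import numpy as np

def posterior_quantile(g, N, p, grid_size=100_000):
    """p-th posterior quantile of gamma on (0, 1), given observed g,
    under a uniform(0,1) prior, by numerical integration on a grid."""
    nu = N - 1
    gamma = np.linspace(1e-6, 1.0, grid_size)
    # Unnormalized log-posterior = log-likelihood of g given gamma
    # (worked on the log scale for numerical stability).
    log_post = -(nu / 2) * np.log(gamma) - nu * g / (2 * gamma)
    post = np.exp(log_post - log_post.max())
    cdf = np.cumsum(post)
    cdf /= cdf[-1]
    return gamma[np.searchsorted(cdf, p)]

# Median vs. a higher quantile as point estimates (illustrative values):
print(posterior_quantile(0.8, 10, 0.5))   # posterior median
print(posterior_quantile(0.8, 10, 0.75))  # larger, pulled toward 1
```

One connection worth noting between the two replies: under an asymmetric absolute ("check") loss that penalizes underestimation with weight p and overestimation with weight 1-p, the Bayes point estimate is exactly the p-th posterior quantile. So the quantile-p idea already corresponds to a definite loss function, as suggested in point (2).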

