Views expressed in these public forums are not endorsed by
Drexel University or The Math Forum.


Paul
Posts: 309
Registered: 2/23/10

Re: chisquare in confidence interval for failure rate
Posted: Mar 16, 2013 11:44 PM


On Mar 16, 10:41 pm, Paul wrote:

> On Mar 16, 10:09 pm, Paul wrote:
>
>> All the info I found online for estimating failure rate from the number of failures over a time period makes reference to chi-square, but without actually describing how it comes about. The most helpful explanation I found is http://www.weibull.com/hotwire/issue116/relbasics116.htm.
>>
>> However, I'm missing some fundamental intuition behind the starting point (equation 5). It is assumed that we see r failures over a time period T. The probabilities for 0 to r failures are summed up and related to the confidence level. Why is this equation true? I mean, we do not see 0 to r failures, we only see r failures, so why do those lesser failure counts come into the picture at all?
>>
>> Thanks if anyone can refer to an online explanation. If it is offline, I can probably get it eventually, but online would be so much more suitable for my timeframe.
>
> I tried to plug in typical numbers to lend a bit of concreteness to the equations. Assume a confidence level of 95%, so the one-sided tail is 5%. Assume that we saw 5 failures over T = 1 hour. What equation 5 says is that there is a 5% chance of seeing 0 to 5 failures in 1 hour. Somehow, that condition is satisfied by the upper-bound failure rate. I'm having a hard time seeing why.
>
> This is not quite the same as the typical hypothesis-testing problems that I've seen (though admittedly, I'm relatively new to that territory as well). In textbook hypothesis testing, one usually has an H0 that occupies a point on a real line that represents the possible values of the test statistic, e.g. the estimated mean. The acceptance and rejection regions are perfectly obvious.
>
> In the above problem, it seems like the test statistic is the number of failures r, and it isn't clear to me what H0 is and why the region [0, r] corresponds to the rejection region. This is all assuming that I'm not barking up the wrong tree by drawing analogies with hypothesis testing.
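To make those concrete numbers checkable, here is a quick numerical sketch (Python with scipy; the specific solver and tolerance are my choices, not anything from the weibull.com page). It solves equation 5 directly, i.e. finds lambda such that the probability of seeing 0 to r failures in time T equals the 5% tail, and compares the result to the chi-square closed form quoted in the reliability literature:

```python
# Sanity check of "equation 5": r = 5 failures observed over T = 1 hour,
# one-sided confidence level 95% (so the tail is 5%).
from scipy.optimize import brentq
from scipy.stats import chi2, poisson

r, T, CL = 5, 1.0, 0.95

# Equation 5: find lambda such that P(0..r failures in time T) = 1 - CL.
# The failure count is Poisson-distributed with mean lambda*T.
lam_upper = brentq(lambda lam: poisson.cdf(r, lam * T) - (1 - CL), 1e-9, 100.0)

# Chi-square closed form: lambda_upper = chi2.ppf(CL, 2*(r+1)) / (2*T).
# This agrees with the numeric solve because the Poisson CDF in r equals
# the chi-square survival function with 2*(r+1) degrees of freedom.
lam_chi2 = chi2.ppf(CL, 2 * (r + 1)) / (2 * T)

print(lam_upper)  # numeric solve of equation 5
print(lam_chi2)   # chi-square closed form; matches lam_upper
```

The two values coincide, which at least confirms that the chi-square expression is exactly the solution of equation 5 rather than an approximation to it.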
I think part of the problem I'm having is that a probability of lambda*T falling in a certain range is equated to a sum of probabilities involving the failure count r. Instead, it seems that what we should be seeking is lambda such that:

    p( lambda*T < UpperBound | r=5 ) = 95%

which requires integration of pdf(lambda*T | r=5) from 0 to UpperBound. Since UpperBound is not yet known, the integral needs to be closed-form. It could possibly be re-expressed in terms of the Poisson distribution using Bayes' theorem.
Does this make sense to the graybeards in the field, regardless of its tractability? If so, what kind of inaccuracies are introduced by the weibull.com formula in my original post (if any)? That approach seems to be endemic in the reliability field, and I'm assuming it's because the ideal-driven formula above would not be tractable (assuming that it's correct, of course).
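For what it's worth, the Bayesian calculation sketched above is tractable under one assumption that I'm adding here: a flat (uniform) prior on lambda. With that prior, pdf(lambda | r=5) is proportional to the Poisson likelihood lambda^r * exp(-lambda*T), i.e. a Gamma(r+1, rate=T) density, so the integral from 0 to UpperBound is just the Gamma CDF and the bound can be read off its quantile function:

```python
# Bayesian upper bound on lambda given r observed failures in time T,
# under the (assumed) flat prior on lambda. The posterior is then
# Gamma(r + 1, rate = T), so P(lambda < bound | r) = CL is solved by
# the Gamma quantile function.
from scipy.stats import chi2, gamma

r, T, CL = 5, 1.0, 0.95

# 95% upper credible bound: P(lambda < lam_bayes | r) = 0.95
lam_bayes = gamma.ppf(CL, a=r + 1, scale=1.0 / T)

# Chi-square bound from the weibull.com formula
lam_chi2 = chi2.ppf(CL, 2 * (r + 1)) / (2 * T)

print(lam_bayes, lam_chi2)  # the two bounds coincide under a flat prior
```

Since a Gamma(r+1, rate=T) variable scaled by 2T is chi-square with 2(r+1) degrees of freedom, the flat-prior Bayesian bound and the chi-square bound are numerically identical, so at least under that prior the weibull.com formula introduces no inaccuracy at all.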



