

Paul
Posts: 359
Registered: 2/23/10

Why multiply t by sample std dev?
Posted: Sep 30, 2012 3:55 PM
A stats book I'm using describes confidence intervals for estimating a population mean assuming: (i) a normally distributed population, (ii) a small sample size, and (iii) an unknown population standard deviation. They use the t distribution. Depending on whether they are doing a two-tailed or one-tailed test, they find the proper spot under the t distribution, take the t value, and multiply it by the sample standard deviation to get the offset from the sample mean that defines the confidence interval.
I am confused by the part about multiplying the t value by the sample standard deviation. If it were the *normal* distribution, we would multiply the z value by the sample standard deviation because the standard normal distribution has a standard deviation of 1, so multiplying by the sample standard deviation is simply rescaling the horizontal axis. However, the standard deviation of the t distribution is sqrt[df/(df-2)]. To do similar rescaling, shouldn't the t value first be divided by sqrt[df/(df-2)] to get it in terms of standard deviations of t (after all, the z value is basically in terms of the standard deviation of the normal distribution), *then* multiplied by the sample standard deviation to get it in terms of the units of measure of the random variable?
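For concreteness, here is a minimal sketch of the textbook t-interval in Python (using scipy, which is my choice, not the book's; the sample values are made up for illustration). Note that most textbooks actually scale the t value by the *standard error* s/sqrt(n), not by s alone, and that the t distribution's own standard deviation sqrt(df/(df-2)) never enters the formula:

```python
import math
from scipy import stats

# Hypothetical sample data (illustrative only, not from the book)
sample = [4.2, 5.1, 4.8, 5.5, 4.9, 5.0, 4.7, 5.3]
n = len(sample)
df = n - 1
xbar = sum(sample) / n
s = math.sqrt(sum((x - xbar) ** 2 for x in sample) / (n - 1))

# Two-tailed 95% critical value from the t distribution with n-1 df
t_crit = stats.t.ppf(0.975, df)

# Textbook interval: the t value is scaled by the standard error s/sqrt(n)
margin = t_crit * s / math.sqrt(n)
ci = (xbar - margin, xbar + margin)

# The t distribution's own standard deviation is sqrt(df/(df-2)) > 1,
# yet the interval formula above never divides by it
t_sd = stats.t.std(df)
print("95% CI:", ci)
print("t std dev:", t_sd, "=", math.sqrt(df / (df - 2)))
```

Running this shows the critical value t_crit exceeding the normal-distribution z value of about 1.96, which is how the t procedure widens the interval for small samples; no separate division by the t distribution's standard deviation appears.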



