A stats book I'm using describes confidence intervals for estimating a population mean assuming: (i) a normally distributed population, (ii) a small sample size, and (iii) an unknown population standard deviation. They use the t-distribution. Depending on whether they are doing a 2-tail or 1-tail test, they find the proper spot under the t-distribution, take the t-value, and multiply it by the sample standard deviation to get the offset from the sample mean that defines the confidence interval.
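To make sure I'm reading the procedure correctly, here is a minimal sketch of it in Python (the sample data, the use of scipy, and the variable names are mine, not the book's):

```python
# A minimal sketch of the book's procedure as I understand it;
# the data and the use of scipy are mine, not the book's.
import numpy as np
from scipy import stats

sample = np.array([4.2, 5.1, 4.8, 5.5, 4.9, 5.0])  # made-up small sample
n = len(sample)
df = n - 1

x_bar = sample.mean()
s = sample.std(ddof=1)  # sample standard deviation

# two-tailed 95% interval: find the "proper spot" under the t-distribution
t_crit = stats.t.ppf(0.975, df)

offset = t_crit * s  # the step I'm asking about
ci = (x_bar - offset, x_bar + offset)
print(ci)
```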
I am confused by the part about multiplying the t-value by the sample standard deviation. If it were the *standard normal* distribution, we would multiply the z-value by the sample standard deviation because the standard normal distribution has a standard deviation of 1, so multiplying by the sample standard deviation is simply rescaling the horizontal axis. However, the standard deviation of the t-distribution is sqrt[df/(df-2)]. To do a similar rescaling, shouldn't the t-value first be divided by sqrt[df/(df-2)] to express it in terms of standard deviations of the t-distribution (after all, the z-value is basically in terms of standard deviations under the standard normal distribution), and *then* multiplied by the sample standard deviation to get it in the units of measure of the random variable?
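To make the rescaling I have in mind concrete, here is what I would compute instead (again, just a sketch of my own reasoning with made-up numbers, not something from the book):

```python
# Sketch of my proposed rescaling (my reasoning, not the book's method).
import numpy as np
from scipy import stats

df = 5                          # degrees of freedom, n - 1 for a sample of 6
s = 0.44                        # sample standard deviation (illustrative value)
t_crit = stats.t.ppf(0.975, df)

t_sd = np.sqrt(df / (df - 2))   # standard deviation of the t-distribution, sqrt[df/(df-2)]

offset_book = t_crit * s           # offset as the book computes it
offset_mine = (t_crit / t_sd) * s  # rescale the t-value into "its own" standard deviations first
print(offset_book, offset_mine)
```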