>First question. If the AP curriculum does not include type I and type
>II errors, then how would one describe significance levels? Given that
>alpha is the probability of a type I error, is it possible to discuss
>alpha without bringing errors into the discussion?

The P-value measures the strength of the evidence against Ho: the smaller the P-value, the stronger the evidence against Ho. But then the question is "How small does the P-value have to be before I decide to reject Ho?" That's where alpha comes into play. It gives us a rule, a "line in the sand," so that if the P-value is smaller than alpha we decide to reject Ho. It's a threshold of "smallness" for P-values.
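In case a concrete illustration helps, here is a minimal sketch in Python (the data, the hypothesized mean of 50, and the alpha of 0.05 are all made up for illustration; scipy's ttest_1samp does the test):

    from scipy import stats

    # Hypothetical data: does the population mean differ from 50?
    sample = [52.1, 49.8, 53.4, 51.2, 50.9, 54.0, 48.7, 52.8]
    t_stat, p_value = stats.ttest_1samp(sample, popmean=50)

    alpha = 0.05  # the "line in the sand", chosen before looking at the data
    if p_value < alpha:
        print(f"P = {p_value:.3f} < alpha = {alpha}: reject Ho")
    else:
        print(f"P = {p_value:.3f} >= alpha = {alpha}: fail to reject Ho")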
>Second. I get the impression that often tests of significance are done
>without setting an alpha level. If that is the case, how does one really
>make a decision based on the P-value?
One can report the results of a significance test by giving the P-value. The reader can then decide whether they think there is sufficient evidence to reject Ho. However, to make this decision the reader must have an alpha in mind.
>No matter how small P may be, there is still a chance that we will be
>wrong if we conclude a significant effect is present.
Any time you make an inference based on a sample there is a possibility of error. Whether you use P-values or a fixed-level test won't affect that.
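A quick simulation makes the point. Here is a sketch (assuming Python with numpy and scipy; the sample size, seed, and number of trials are arbitrary) that samples repeatedly from a population in which Ho is true. The test rejects, wrongly, about alpha of the time no matter how carefully we proceed:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    alpha = 0.05
    trials = 10_000
    rejections = 0

    # Draw samples from a population in which Ho (mu = 0) is TRUE,
    # and see how often the test rejects anyway.
    for _ in range(trials):
        sample = rng.normal(loc=0.0, scale=1.0, size=30)
        _, p = stats.ttest_1samp(sample, popmean=0.0)
        if p < alpha:
            rejections += 1

    print(f"Rejection rate when Ho is true: {rejections / trials:.3f}")  # ~0.05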
>Last. I would like comments on the following multiple-choice question.
>The question assumes no preset significance level.
>
> Q: A significance test gives a P-value of .04. From this we can say:
>
> A. We will fail to reject H0 4% of the time
> B. We will reject H0 4% of the time
> C. If we reject H0, our decision has a 4% chance of being incorrect
> D. If we reject H0, our decision has a 4% chance of being correct
> E. None of the above are true
I would choose E. The P-value = 0.04 means that if we are sampling from a population in which Ho is true, then the chance of getting a test statistic at least as extreme as the one we observed is 0.04.
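If it helps, that interpretation can be checked by simulation. A sketch (Python with numpy; the observed z of 2.05 is a hypothetical value whose two-sided P-value under a standard normal null is about 0.04):

    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical observed z statistic; its two-sided P-value under a
    # standard normal null is about 0.04.
    z_obs = 2.05
    trials = 100_000

    # Simulate the test statistic under Ho and count how often it comes
    # out at least as extreme as the one we observed.
    z_sim = rng.standard_normal(trials)
    p_est = np.mean(np.abs(z_sim) >= abs(z_obs))
    print(f"Estimated P-value: {p_est:.3f}")  # close to 0.04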