On Sat, 22 Dec 2012 21:27:54 -0800 (PST), Ray Koopman <firstname.lastname@example.org> wrote:
>On Dec 22, 5:22 pm, czytaczg...@gmail.com wrote:
>> [...]
>>
>> according to the KS test they come from the same distribution:
>
>NO. You misunderstand the logic of hypothesis testing. Failing to
>reject a hypothesis does not mean that it is true or that you should
>act as if it were true. It means only that, in the way that the test
>looks at data, your data are not inconsistent with the hypothesis.
>Other tests, that look at the data differently, may well disagree.
I'll just add that these so-called non-parametric tests are based on ranks, and their usual p-values are calculated on the assumption of "no ties" -- that certainly does not characterize these data, with 50 scores ranging from 1 to 5. It is conceivable that a Monte Carlo version of the KS test, done by generating 10,000 samples with the same margins, would show that the observed KS outcome *is* an unusual one. Or it might not.
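To make the Monte Carlo idea concrete, here is one way to sketch it: pool the two samples and repeatedly permute the pooled scores, which preserves the combined counts in each category (and thus handles the ties exactly), recomputing the KS statistic each time. The 5-point data below are hypothetical stand-ins, since the original samples were not posted.

```python
# Monte Carlo (permutation) KS test for tied, 5-point ordinal data.
# The data in a and b are made up for illustration only.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

a = np.array([1]*5 + [2]*10 + [3]*15 + [4]*12 + [5]*8)   # hypothetical group A
b = np.array([1]*8 + [2]*12 + [3]*14 + [4]*10 + [5]*6)   # hypothetical group B

d_obs = ks_2samp(a, b).statistic      # observed KS distance
pooled = np.concatenate([a, b])

n_iter = 10_000
count = 0
for _ in range(n_iter):
    perm = rng.permutation(pooled)    # reshuffle, keeping the margins fixed
    d = ks_2samp(perm[:len(a)], perm[len(a):]).statistic
    if d >= d_obs:
        count += 1

p_mc = (count + 1) / (n_iter + 1)     # Monte Carlo p-value
print(f"observed D = {d_obs:.3f}, Monte Carlo p = {p_mc:.3f}")
```

The resulting p-value is exact up to simulation error regardless of ties, unlike the asymptotic KS p-value.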
It seems to me, though I'm not entirely sure, that the KS test is fundamentally testing the number of "interchanges" in ranks (a linear metric), whereas the other two tests are measuring the squared differences in ranks. So the tests may disagree because they are using two different ways to measure the non-fit.
For these data, I would be willing to report the means as meaningful ... and thus, using that as a guide, to use the ordinary t-test for the comparison.
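If one does treat the 1-5 scores as interval-scaled, the ordinary two-sample t-test is a one-liner. Again, the data here are hypothetical placeholders for the unposted samples.

```python
# Ordinary (pooled-variance) two-sample t-test on the group means.
# The data in a and b are made up for illustration only.
import numpy as np
from scipy.stats import ttest_ind

a = np.array([1]*5 + [2]*10 + [3]*15 + [4]*12 + [5]*8)   # hypothetical group A
b = np.array([1]*8 + [2]*12 + [3]*14 + [4]*10 + [5]*6)   # hypothetical group B

t, p = ttest_ind(a, b)   # equal_var=True by default: the classic t-test
print(f"mean A = {a.mean():.2f}, mean B = {b.mean():.2f}, t = {t:.2f}, p = {p:.3f}")
```

Welch's version (`equal_var=False`) would be the usual alternative if the group variances look unequal.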