

Luis A. Afonso
Posts: 4,758
From: Lisbon (Portugal)
Registered: 2/16/05


Nonparametric Inference: Resampling Techniques Criticism (I)
Posted:
Aug 28, 2013 7:03 PM


In classical frequentist statistics, criticism of nonparametric methods (sometimes confounded with distribution-free methods) is nothing new. With reference to a recent note: Yu, Chong Ho (2003). Resampling methods: concepts, applications, and justification. pareonline.net/getvn.asp?v=8&n=19. Quoting: Criticisms of resampling.
1. Assumption: Stephen E. Fienberg mocked resampling by saying, "You're trying to get something for nothing. You use the same numbers over and over again until you get an answer that you can't get any other way. In order to do that, you have to assume something, and you may live to regret that hidden assumption later on" (cited in Peterson, 1991, p. 57). Every theory and procedure is built on certain assumptions and requires a leap of faith to some degree. Indeed, the classical statistics framework requires more assumptions than resampling does.

My comment: The idea that any procedure, whatever it is, requires some amount of inductive addition (faith) is lucid, I mean. I feel that the information a sample provides cannot be increased by reusing the same numbers over and over.
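The "same numbers over and over" that Fienberg mocks can be made concrete with a minimal bootstrap sketch (the sample values and the `bootstrap_se_of_mean` helper below are illustrative assumptions, not taken from the post or from Yu's note): one sample is resampled with replacement many times, and the spread of the resampled means estimates the standard error of the mean.

```python
import random
import statistics

def bootstrap_se_of_mean(sample, n_boot=2000, seed=0):
    """Estimate the standard error of the mean by drawing n_boot
    resamples (with replacement) from the one observed sample."""
    rng = random.Random(seed)
    n = len(sample)
    boot_means = [
        statistics.mean(rng.choices(sample, k=n))  # the same numbers, reused
        for _ in range(n_boot)
    ]
    return statistics.stdev(boot_means)

sample = [4.1, 5.0, 3.8, 6.2, 5.5, 4.9, 5.1, 4.4]  # illustrative data
se_boot = bootstrap_se_of_mean(sample)
se_classical = statistics.stdev(sample) / len(sample) ** 0.5  # s / sqrt(n)
print(se_boot, se_classical)  # the two estimates tend to be close
```

Note that no new information appears: the bootstrap only redistributes what the single sample already contains, which is exactly the point of the criticism and of the reply that classical formulas rest on assumptions of their own.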
2. Generalization: Some critics argued that resampling is based on one sample and therefore the generalization cannot go beyond that particular sample. One critic even went further to say, "I do wonder, though, why one would call this (resampling) inference?" (cited in Ludbrook & Dudley, 1998). Nevertheless, Fan and Wang (1996) stated that assessing the stability of test results is descriptive, not inferential, in nature.

My comment: Of course all inference is based on data; that is precisely what happens with parametric methods as well. There is no way to correct the distortion that the specific nature of each sample introduces. Samples, even random ones, carry a sui generis (particular) point of view with regard to how they represent the population's parameter values. Suppose you obtain (randomly) a sample of size n. If you form all subsamples of size k < n, you will find that each one of the nCk of them provides a (potentially) different sample mean, also different from that of the source n-sized sample. However, these subsample means will show some kind of (positive?) correlation. The population mean, mu, in spite of having a fixed value, is thus estimated by many different values. On the other hand, the larger the sample, the smaller the expected absolute difference between the observed mean and mu.

Luis A. Afonso
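The subsample argument above can be checked directly: enumerate all nCk subsamples of one sample, compute their means, and observe that they scatter around the full-sample mean (the sample values and sizes below are illustrative assumptions, chosen small enough to enumerate exhaustively).

```python
from itertools import combinations
from statistics import mean

sample = [2.0, 3.5, 5.1, 4.2, 6.3, 3.9]  # illustrative size-n sample, n = 6
n, k = len(sample), 4

# every one of the C(n, k) subsamples of size k yields its own mean
sub_means = [mean(c) for c in combinations(sample, k)]

full_mean = mean(sample)
print(len(sub_means))                # C(6, 4) = 15 distinct subsamples
print(min(sub_means), max(sub_means))  # the means scatter around full_mean
# because each value appears in the same number of subsamples, the
# average of all subsample means recovers the full-sample mean
print(abs(mean(sub_means) - full_mean) < 1e-9)
```

Each element of the sample appears in C(n-1, k-1) of the subsamples, which is why the subsample means are correlated with one another (they share data points) and why their overall average coincides with the full-sample mean, while any individual subsample mean generally does not.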



