
Topic: Discrepancy in estimating goodness of Poisson GLM
Replies: 15   Last Post: Jan 18, 2012 8:17 PM

Eric Goodwin

Posts: 11
Registered: 1/15/12
Re: Discrepancy in estimating goodness of Poisson GLM
Posted: Jan 16, 2012 3:44 PM

Thanks Rich, that makes sense.

Yes, pseudo-R-squared statistics were among the other options, but I haven't looked at those yet. I was steered away from them by this paper (Zheng, B. and A. Agresti. 2000. Summarizing the predictive power of a generalized linear model. Statistics in Medicine 19: 1771-1781), which justified my calculation of the correlation between predicted and measured values instead. I also thought I'd see whether I could clear up the reasons behind the apparently contradictory scores returned by the methods I've tried so far - they are highly polarised, saying either that the model is exceptionally good or that it is total twaddle.
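
For concreteness, this is the sort of calculation I mean - a minimal sketch in Python with statsmodels (the data file, formula and column names below are placeholders, not my actual variables):

    # Correlation-based summary in the spirit of Zheng & Agresti (2000):
    # fit the Poisson GLM, then correlate observed and fitted values.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("survey_data.csv")          # hypothetical data file
    fit = smf.glm("count ~ x1 + x2 + x3",        # placeholder formula
                  data=df,
                  family=sm.families.Poisson()).fit()

    r = np.corrcoef(df["count"], fit.fittedvalues)[0, 1]
    print("corr(observed, fitted) =", r)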

Regarding the number of candidate explanatory variables offered to the backward variable selection stage, it was well below the sample size (9 << 31). Four of the nine were chosen from ten related alternative measures by comparing their explanatory power in univariate relationships; the remaining five were rationally selected from a larger list, without any regression, as the ones most likely to affect the response variable.

Your guidance that I should probably keep only the most significantly explanatory variables is useful.

Can you point me to any further reading on model-selection methods more appropriate than stepwise variable selection? I understood it was common practice to use AIC as an indication of the relative appropriateness of two models differing in the (number of) explanatory variables used; that almost implies nested models, where one is a stepwise modification of the other. I appreciate the caveats about the R^2 achievable from even random explanatory variables when their number approaches the sample size, but is there any better alternative for the numerically enthusiastic ecologist?
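
To make the comparison I have in mind concrete, here is a minimal sketch (again Python/statsmodels, with placeholder formulas and data) of comparing a full model against a nested reduction by AIC:

    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    df = pd.read_csv("survey_data.csv")          # hypothetical data file

    full    = smf.glm("count ~ x1 + x2 + x3", data=df,
                      family=sm.families.Poisson()).fit()
    reduced = smf.glm("count ~ x1 + x2", data=df,
                      family=sm.families.Poisson()).fit()

    print("AIC full:   ", full.aic)     # lower AIC is preferred
    print("AIC reduced:", reduced.aic)

    # For strictly nested models a likelihood-ratio test is another option:
    # lr = 2 * (full.llf - reduced.llf), compared against a chi-squared
    # distribution with df equal to the number of dropped terms.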

Regards,

Eric



