Topic: Explanation for why linear regression is a poor fit
Replies: 8   Last Post: Feb 15, 2013 10:36 AM

Richard Ulrich

Posts: 2,865
Registered: 12/13/04
Re: Explanation for why linear regression is a poor fit
Posted: Feb 4, 2013 6:13 PM

On Mon, 4 Feb 2013 12:55:36 -0800 (PST), em.derenne@gmail.com wrote:

>Hi-
>I haven't taken stats in a few years, and recently a lot of statistics have been thrown around my workplace, including the attached graph (and raw data). I realize that a low R^2 means the linear regression is not a good fit, but it produces a p-value of 0.025. I can't formulate a solid argument because I don't understand the material well enough. Am I incorrect in saying this is a poor fit? Even visually it looks like a poor fit to me. Additionally, he says things like: "FC Count at Samish River/Thomas Road: N = 498, r2 = 0.01, p = 0.025, meaning it is significant at 97.5% confidence." I know you can't use p-values to describe statistics like this. I need help explaining why this data isn't showing a significant declining trend with a linear regression (unless, of course, I am incorrect).
>
>Thanks for clarification and help.
>
>Data and Graph: http://dl.dropbox.com/u/18470470/Copy%20of%20Regression%20Correlation%20info.xlsx
>


Oh, there's a declining trend, of sorts. I'm tempted to say
that nobody calls an R^2 of 0.01 a large effect -- but that's
not really true. Context rules.
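
By the way, the two numbers in the report are not in conflict: with
N = 498, even r^2 = 0.01 (that is, r = 0.1) gives a slope t statistic
of roughly t = r*sqrt((n-2)/(1-r^2)) = 2.24, hence p near 0.025. A
minimal sketch of that arithmetic in Python, using only the figures
quoted in the post:

    import math
    from scipy import stats

    r, n = 0.1, 498                          # r^2 = 0.01, N = 498 from the post
    t = r * math.sqrt((n - 2) / (1 - r**2))  # about 2.24
    p = 2 * stats.t.sf(t, df=n - 2)          # two-sided p, about 0.025
    print(t, p)

With a sample that large, a very small effect can still clear p < 0.05;
"significant" and "large" are different questions.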

The highest 5 outcome scores are all in the first half of the graph,
and the very highest one is near the beginning. Does that
seem important? That's most of the effect.
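
For anyone who wants to see that in the spreadsheet, a rough sketch
(the column names 'date' and 'fc_count' below are guesses at the
layout of the linked file; adjust to match):

    import pandas as pd

    df = pd.read_excel("Copy of Regression Correlation info.xlsx")
    df = df.sort_values("date").reset_index(drop=True)

    top5 = df["fc_count"].nlargest(5)
    print(top5.index.tolist())   # row positions of the 5 largest counts
    print(len(df) // 2)          # do they all fall before the halfway mark?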

On the other hand, very few people would say that any
time series is properly tested by a simple linear regression
when there are autocorrelation effects... which there almost
always are.
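
One common check, sketched with the same assumed column names as
above: fit the OLS trend and look at the Durbin-Watson statistic of
the residuals. Values well below 2 indicate positive autocorrelation,
which makes the usual OLS p-value too optimistic.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.stattools import durbin_watson

    df = pd.read_excel("Copy of Regression Correlation info.xlsx")
    y = df.sort_values("date")["fc_count"].to_numpy()
    X = sm.add_constant(np.arange(len(y)))   # time index as the predictor

    fit = sm.OLS(y, X).fit()
    print(fit.rsquared, fit.pvalues[1])      # should roughly reproduce 0.01 and 0.025
    print(durbin_watson(fit.resid))          # near 2.0 means little autocorrelation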

As to the size of the effect, and how few cases it depends
on -- I'm "pretty sure" that the trend becomes non-significant
if you remove the top 5 points; "probably" for the top 3, and
"maybe" for removing the top one alone.

--
Rich Ulrich



