

Math Forum » Discussions » sci.math.* » sci.stat.math
Topic: Simplifying a regression line
Replies: 14
Last Post: Jun 23, 2011 1:27 PM




Re: Simplifying a regression line
Posted: Jun 23, 2011 1:27 PM


In order to understand your question, I generated an SPSS simulation of what I thought you were describing. I expanded the generating equations so that every coefficient and exponent is explicit, even when it is one or zero.

Does my simulation generate data analogous to yours? What distribution should x be drawn from? I used RV.NORMAL(0,1). Should the same x value be used for all 6 "types", or, as I did it, should each case in the data have its own draw from the distribution?

Am I correct that you are trying to find cases with similar profiles (clusters) across xcubed to xtofirst? Or are you matching against known "types"?

The syntax could have been written more concisely, but I wanted to be very explicit.

Art Kendall
Social Research Consultants
SET SEED=20100723.
NEW FILE.
INPUT PROGRAM.
LOOP id=1 TO 50.
LEAVE id.

COMPUTE type=1.
COMPUTE x=RV.NORMAL(0,1).
COMPUTE xcubed=x**3.
COMPUTE xsquared=x**2.
COMPUTE xtofirst=x**1.
COMPUTE xzeroeth=x**0.
COMPUTE noise=RV.NORMAL(0,.3).
COMPUTE y=0*xcubed + 0*xsquared - 2*xtofirst + 1*xzeroeth + noise.
END CASE.

COMPUTE type=2.
COMPUTE x=RV.NORMAL(0,1).
COMPUTE xcubed=x**3.
COMPUTE xsquared=x**2.
COMPUTE xtofirst=x**1.
COMPUTE xzeroeth=x**0.
COMPUTE noise=RV.NORMAL(0,.3).
COMPUTE y=2*xcubed - 0*xsquared + 0*xtofirst + 1*xzeroeth + noise.
END CASE.

COMPUTE type=3.
COMPUTE x=RV.NORMAL(0,1).
COMPUTE xcubed=x**3.
COMPUTE xsquared=x**2.
COMPUTE xtofirst=x**1.
COMPUTE xzeroeth=x**0.
COMPUTE noise=RV.NORMAL(0,.3).
COMPUTE y=1*xcubed - 2*xsquared + 0*xtofirst + 1*xzeroeth + noise.
END CASE.

COMPUTE type=4.
COMPUTE x=RV.NORMAL(0,1).
COMPUTE xcubed=x**3.
COMPUTE xsquared=x**2.
COMPUTE xtofirst=x**1.
COMPUTE xzeroeth=x**0.
COMPUTE noise=RV.NORMAL(0,.3).
COMPUTE y=1*xcubed + 0*xsquared + 0*xtofirst + 0*xzeroeth + noise.
END CASE.

COMPUTE type=5.
COMPUTE x=RV.NORMAL(0,1).
COMPUTE xcubed=x**3.
COMPUTE xsquared=x**2.
COMPUTE xtofirst=x**1.
COMPUTE xzeroeth=x**0.
COMPUTE noise=RV.NORMAL(0,.3).
COMPUTE y=2*xcubed - 2*xsquared + 0*xtofirst + 1*xzeroeth + noise.
END CASE.

COMPUTE type=6.
COMPUTE x=RV.NORMAL(0,1).
COMPUTE xcubed=x**3.
COMPUTE xsquared=x**2.
COMPUTE xtofirst=x**1.
COMPUTE xzeroeth=x**0.
COMPUTE noise=RV.NORMAL(0,.3).
COMPUTE y=1*xcubed + 0*xsquared - 2*xtofirst + 0*xzeroeth + noise.
END CASE.

END LOOP.
LOOP id=101 TO 200.
LEAVE id.
COMPUTE type=7.
COMPUTE x=RV.NORMAL(0,1).
COMPUTE xcubed=x**3.
COMPUTE xsquared=x**2.
COMPUTE xtofirst=x**1.
COMPUTE xzeroeth=x**0.
COMPUTE noise=RV.NORMAL(0,.3).
COMPUTE y=RV.NORMAL(0,1).
END CASE.
END LOOP.
END FILE.
END INPUT PROGRAM.
FORMATS id (F5) type (F1).
EXECUTE.
MEANS TABLES=xcubed TO xzeroeth BY type
  /CELLS=ALL
  /STATISTICS=ALL.
DISCRIMINANT
  /GROUPS=type(1 7)
  /VARIABLES=xcubed xsquared xtofirst
  /SAVE CLASS predtype
  /ANALYSIS ALL
  /PRIORS EQUAL
  /STATISTICS=UNIV COEFF TABLE
  /PLOT=COMBINED SEPARATE MAP
  /PLOT=CASES
  /CLASSIFY=NONMISSING SEPARATE.
TWOSTEP CLUSTER
  /CONTINUOUS VARIABLES=xcubed xsquared xtofirst
  /DISTANCE LIKELIHOOD
  /NUMCLUSTERS FIXED=6
  /HANDLENOISE 25
  /MEMALLOCATE 64
  /CRITERIA INITHRESHOLD(0) MXBRANCH(8) MXLEVEL(3)
  /VIEWMODEL DISPLAY=YES EVALUATIONFIELDS=type
  /SAVE VARIABLE=TSC_25.
CROSSTABS TABLES=tsc_25 BY type predtype.
SORT CASES BY type.
SPLIT FILE BY type.
REGRESSION VARIABLES=y xcubed TO xtofirst
  /STATISTICS=R COEFF
  /DEPENDENT=y
  /ENTER=xcubed TO xtofirst.
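For readers without SPSS, the data generation and the per-type regressions above can be sketched in NumPy. This is a rough translation, not the original run: it assumes each case gets its own draw of x (as in the SPSS syntax), and the type-to-coefficient mapping is copied from the generating equations above.

```python
import numpy as np

rng = np.random.default_rng(20100723)

# Generating coefficients (xcubed, xsquared, xtofirst, xzeroeth)
# for the six "types", copied from the SPSS equations above.
coefs = {
    1: (0, 0, -2, 1),
    2: (2, 0, 0, 1),
    3: (1, -2, 0, 1),
    4: (1, 0, 0, 0),
    5: (2, -2, 0, 1),
    6: (1, 0, -2, 0),
}

fitted = {}
for t, (c3, c2, c1, c0) in coefs.items():
    x = rng.normal(size=50)                    # each case: its own draw of x
    noise = rng.normal(scale=0.3, size=50)
    y = c3 * x**3 + c2 * x**2 + c1 * x + c0 + noise
    # Per-type OLS on (x^3, x^2, x, 1), mirroring the SPLIT FILE regression.
    X = np.column_stack([x**3, x**2, x, np.ones_like(x)])
    fitted[t], *_ = np.linalg.lstsq(X, y, rcond=None)
```

With 50 cases per type and noise SD 0.3, each fitted coefficient vector lands close to its generating values, which is what the split-file regressions are checking.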
On 6/23/2011 8:53 AM, RossClement@gmail.com wrote:
> On Jun 23, 8:24 am, Ray Koopman <koop...@sfu.ca> wrote:
>> Look up Regression Mixture Modeling
>
> Ah, thank you.
>
> Looks simple enough, with straight regression (no need for stepwise)
> and Expectation-Maximisation to allocate the data to clusters. Code
> coming up within a few days I'd hope.
>
> But, after reading a bit, I think I didn't do quite enough regular
> "exploratory" regression on my data, so I'm going to do a bit more of
> that first.
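The regression-mixture-with-EM idea suggested in the quoted exchange can be sketched in NumPy. This is a hypothetical two-component toy (straight-line components with slopes 2 and -1 and noise SD 0.3), not the original poster's data or code; the deterministic starting slopes are an assumption to keep the toy stable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two hidden regimes with different linear relationships.
n = 200
x = rng.normal(size=n)
z = rng.random(n) < 0.5                       # hidden component labels
y = np.where(z, 2.0 * x + 1.0, -1.0 * x - 2.0) + rng.normal(scale=0.3, size=n)

X = np.column_stack([np.ones(n), x])          # design matrix: intercept, x
K = 2
beta = np.array([[0.0, 1.0], [0.0, -1.0]])    # (intercept, slope) inits
sigma2 = np.ones(K)                           # per-component noise variance
pi = np.full(K, 1.0 / K)                      # mixing proportions

for _ in range(200):
    # E-step: responsibilities from Gaussian regression log-likelihoods.
    resid = y[:, None] - X @ beta.T                            # (n, K)
    logp = (np.log(pi) - 0.5 * np.log(2 * np.pi * sigma2)
            - 0.5 * resid**2 / sigma2)
    logp -= logp.max(axis=1, keepdims=True)                    # stability
    r = np.exp(logp)
    r /= r.sum(axis=1, keepdims=True)

    # M-step: weighted least squares per component, then variance and mix.
    for k in range(K):
        w = r[:, k]
        Xw = X * w[:, None]
        beta[k] = np.linalg.solve(Xw.T @ X, Xw.T @ y)
        sigma2[k] = (w * (y - X @ beta[k])**2).sum() / w.sum()
    pi = r.mean(axis=0)
```

After convergence the two rows of `beta` recover the generating lines, and the responsibilities `r` give the soft cluster assignment the quoted reply describes. Real data would also need multiple restarts and a criterion for choosing the number of components.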



