So Far
• Issues in psychological measurement:
  • standardisation/Z scores
• Introduction to Linear Regression
Today
• Examples of simple/bivariate linear regression
• Predictions using Z scores
• Predictions using raw scores
Predictions with Z Scores
• A predicted Z score on the criterion variable (DV) is found by multiplying a particular number (the regression coefficient) by the Z score on the predictor variable (IV)
• With Z scores this is the standardised regression coefficient (β weight):
Predicted ZY = (β)(ZX)
where ZX is the known Z score on predictor variable X
Preliminary Example
• Suppose (from the homework) β for predicting 3rd-yr average from 1st-yr marks is .3, and Bob has 1st-yr marks 2 SDs above the mean (Z = 2)
• Predicted 3rd-year average is
Predicted ZY = (β)(ZX) = (.3)(2) = .6
[a Z of 0.6 = just over half an SD above the mean]
• But how do we calculate β in the first place?
Calculating β
• Simple in bivariate prediction with Z scores:
• β = r, the correlation coefficient
• So we can predict a score using just the correlation coefficient and a Z score
Another Example
• Correlation between no. employees supervised (predictor variable or IV) and manager's stress (criterion variable or DV) is .88 (r = .88)
• Thus β = .88, and to predict a manager's stress we multiply .88 by the Z score for no. employees supervised
• If supervising 10 employees (mean 7, SD 2.37), convert to a Z score: Z = (X – M)/SD = (10 – 7)/2.37 = 1.27
• Thus Predicted ZY = (β)(ZX) = (.88)(1.27) = 1.12
• So a manager supervising 10 employees is predicted to have a stress level more than 1 SD above the mean
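A minimal Python sketch of this calculation (the function and variable names are mine, not part of the slides):

```python
# Z-score prediction for the manager example: beta = r = .88,
# X = 10 employees, M = 7, SD = 2.37.

def z_score(x, mean, sd):
    """Convert a raw score to a Z score: Z = (X - M) / SD."""
    return (x - mean) / sd

beta = 0.88                          # standardised coefficient (= r in the bivariate case)
z_x = z_score(10, mean=7, sd=2.37)   # ~1.27

predicted_z_y = beta * z_x           # predicted ZY = (beta)(ZX) ~ 1.11
# The slide gets 1.12 because it rounds ZX to 1.27 before multiplying.
print(round(z_x, 2), round(predicted_z_y, 2))
```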
The Term Regression
• Regression derives from the fact that:
• when there is a less-than-perfect correlation between 2 variables…
• …the predicted DV Z score is only a fraction (equal to r) of the predictor variable's Z score…
• …so the predicted DV Z score is closer to its mean, that is, it regresses towards a Z of 0.
Predictions from Raw Scores
• To get actual scores, we can either start with the raw score, convert it to a Z score, make the prediction, then convert the answer back to a raw score
• OR
• Use the raw score prediction formula
Raw Score Formula
• Raw score formula: Y = a + bX
• where Y = DV/criterion
• X = predictor
• b = raw score regression coefficient (plays the same role as β, but with raw scores b is not the same as r)
• a = regression constant; added to the predicted value to account for the means of the raw score distributions (not needed with Z scores, as standardised variables always have a mean of 0)
Raw Score Formula contd.
Y = a + bX
• We know X, but how do we calculate a and b?
• From β (= correlation coefficient r):
b = (r)(SDY/SDX)
a = MY – (b)(MX)
Example
• X = no. employees supervised, Y = stress level
• Say we have all the scores and MX = 7 employees, SDX = 2.37, MY = 6, SDY = 2.61, r = .88
• Using b = (r)(SDY/SDX) and a = MY – (b)(MX):
b = (.88)(2.61/2.37) = .97, a = 6 – (.97)(7) = –.79
• Using Y = a + bX, predicted stress for 10 employees:
Y = –.79 + (.97)(10) = 8.91
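The same calculation as a Python sketch (names are mine):

```python
# Raw-score regression for the stress example, using the formulas
# b = r * (SDy / SDx) and a = My - b * Mx.

r = 0.88
m_x, sd_x = 7, 2.37      # no. employees supervised
m_y, sd_y = 6, 2.61      # stress level

b = r * (sd_y / sd_x)    # ~0.97
a = m_y - b * m_x        # ~-0.78 (the slide's -.79 comes from rounding b first)

def predict(x):
    """Predicted Y = a + bX."""
    return a + b * x

print(round(b, 2), round(a, 2), round(predict(10), 2))  # 0.97 -0.78 8.91
```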
Example contd.
• The raw score regression coefficient b of .97 means every increase of 1 person supervised is associated with an increase of .97 points in the predicted manager stress level.
• The regression constant a of –.79 means the predicted stress score is also adjusted by subtracting .79 on the stress scale (regardless of no. employees).
Regression Line
• Graph: predictor (X) on x-axis, criterion (Y) on y-axis
• Regression line = 'line of best fit' on the scattergram
• Follow the regression line to see predicted stress levels for any no. of employees supervised
• Steepness of angle of the regression line = slope
• Slope = b (raw score regression coefficient), the amount the line moves up for every unit across
• To draw it: pick any value of X, compute the corresponding predicted Y, and mark the point on the graph.
• Do this twice, then draw a line through the points = regression line
• As a check, find a 3rd point: does the line go up by b for each unit across?
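An illustrative sketch of drawing the line exactly as described, using the stress example's a and b (matplotlib assumed available):

```python
# Pick two X values, compute their predicted Ys, and join the points.
import matplotlib.pyplot as plt

a, b = -0.79, 0.97                    # constants from the stress example
xs = [0, 14]                           # any two values of X
ys = [a + b * x for x in xs]           # their predicted stress levels

plt.plot(xs, ys)                       # the regression line
plt.xlabel("No. employees supervised (X)")
plt.ylabel("Predicted stress level (Y)")
plt.show()
```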
Proportionate Reduction in Error
• How accurate are the predictions? We can estimate this by:
• checking how accurate the model would have been if we had used it to predict the scores we already have
• The first manager supervised 6 people, with stress level 7
• With our model (Y = a + bX):
Y = –.79 + (.97)(6) = predicted stress 5.03
• Do this for all cases, put the results in a table, calculate the error between actual and predicted scores, then square each error (error²) to stop negative and positive errors cancelling out
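A sketch of the error calculation for that first manager (a full table would repeat this for every case in the sample):

```python
a, b = -0.79, 0.97

actual_x, actual_y = 6, 7              # supervised 6 people, stress level 7
predicted_y = a + b * actual_x         # -> 5.03
error = actual_y - predicted_y         # 1.97
squared_error = error ** 2             # ~3.88; squaring removes the sign
print(round(predicted_y, 2), round(squared_error, 2))
```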
Graphing Errors
• We can interpret errors on the graph
• Regression line = predicted scores; draw the actual scores as dots
• This way each error is represented by the vertical distance between the actual score and the regression line
Using Error to Back up Model
• How useful is squared error on its own, though?
• How much better is our model than predicting without it?
• Compare the sum of squared errors of our model (SSE) with the sum of squared errors of 'predictions from the mean' (SST: the squared differences between each actual score and the mean)
• Mean Y (stress level) = 6
Predicting From Mean contd.
• Proportionate reduction in error = (SST – SSE)/SST = (34 – 7.96)/34 = .77
• In bivariate regression PRE always = r²: here r = .88, thus r² = .77
• Also known as the proportion of variance accounted for by the prediction model
• What is a good value?
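Checking this with the slide's totals in Python:

```python
# PRE from the slide's totals (SST for the mean model, SSE for ours).
sst, sse = 34.0, 7.96

pre = (sst - sse) / sst        # ~0.77
r_squared = 0.88 ** 2          # ~0.77; PRE = r**2 (rounding aside)
print(round(pre, 2), round(r_squared, 2))
```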
Graphing PRE
• We could illustrate this on the graph
• Predictions from the mean are represented by a horizontal line
• Proportionate reduction in error = the extent to which the accuracy of the regression line is greater than that of the horizontal line
So Far
• Questions on regression?
Multiple Regression
• We could use additional predictor variables, such as working conditions and deadlines to be met monthly
• More than 1 predictor can allow more accurate prediction of stress level
• Association between the DV and 2+ predictors = multiple correlation
• Predicting the DV from 2+ predictors = multiple regression
• Each predictor is multiplied by its own regression coefficient, and the results are added to make the prediction (plus the regression constant a with raw scores)
Multiple Regression with Z Scores
• E.g. with 3 variables:
Predicted ZY = (β1)(ZX1) + (β2)(ZX2) + (β3)(ZX3)
• where β1 is the standardised regression coefficient for the first predictor etc., and ZX1 is the Z score for the first predictor etc.
Example
• So a multiple regression model for stress level (Y) employing the predictor variables no. employees supervised (X1), noise level (X2), and no. deadlines per month (X3) might be
Predicted ZY = (.51)(ZX1) + (.11)(ZX2) + (.33)(ZX3)
Example contd.
• Suppose we are asked to predict the stress level of a manager with Z scores of 1.27 for no. employees supervised, –1.81 for noise, and .94 for no. deadlines:
Predicted ZY = (.51)(1.27) + (.11)(–1.81) + (.33)(.94)
= .65 + (–.20) + .31 = .76
• So we would predict a stress score of .76 (about 3/4 of an SD above the mean)
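The same three-predictor sum as a Python sketch (names are mine):

```python
betas = [0.51, 0.11, 0.33]           # employees, noise, deadlines
z_scores = [1.27, -1.81, 0.94]       # this manager's Z scores

predicted_z_y = sum(b * z for b, z in zip(betas, z_scores))
print(round(predicted_z_y, 2))       # ~0.76
```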
Multiple vs Bivariate
• In MULTIPLE regression, the β for a predictor variable is NOT the same as its correlation coefficient r
• In most cases it will be lower, because in multiple regression β reflects the unique contribution of the variable, excluding any overlap with the other predictors
Multiple Regression with Raw Scores
• With 3 predictors:
Predicted Y = a + (b1)(X1) + (b2)(X2) + (b3)(X3)
• The multiple correlation coefficient R describes the overall correlation between the predictor variables together and the DV
• R will be at least as high as the highest individual correlation of a predictor with the DV.
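A sketch of fitting such a model with NumPy least squares; the five cases below are invented purely for illustration, not data from the slides:

```python
import numpy as np

X = np.array([[6, 50, 4],            # columns: employees, noise, deadlines
              [8, 60, 7],
              [3, 45, 2],
              [10, 70, 9],
              [7, 55, 5]], dtype=float)
y = np.array([7.0, 8.0, 1.0, 8.0, 6.0])          # stress levels

X1 = np.column_stack([np.ones(len(y)), X])        # add a column of 1s for a
coeffs, *_ = np.linalg.lstsq(X1, y, rcond=None)   # [a, b1, b2, b3]
a, b = coeffs[0], coeffs[1:]

predicted = a + X @ b                             # Y = a + b1*X1 + b2*X2 + b3*X3
R = np.corrcoef(predicted, y)[0, 1]               # multiple correlation R
print(np.round(coeffs, 2), round(R, 2))
```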
Multiple Regression with Raw Scores contd.
• But since each predictor usually overlaps with the others in its association with the DV, R is usually less than the sum of the individual predictors' correlations with the DV
• R and r can never be higher than 1
• Unlike r, R cannot be negative
• R² = proportion of variance in the DV (criterion) accounted for by our model
• R² can overestimate how well the model would do in the real world, so SPSS also calculates an adjusted R² value that takes account of the no. of variables in the model and the no. of observations the model is based on
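A sketch of the standard adjustment formula (to my knowledge the one SPSS reports), with hypothetical n and k:

```python
# Penalises R**2 for the number of predictors k relative to sample size n.
def adjusted_r2(r2, n, k):
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

print(round(adjusted_r2(0.77, n=30, k=3), 2))   # hypothetical values -> 0.74
```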
Multiple Regression: PRE
• Error is calculated by taking the actual score minus the predicted score
• The sum of squared errors (SSE) is calculated for our model; the sum of squared errors (SST) is calculated for the mean model (using the mean of the DV to predict the DV)
• Proportionate reduction in error = (SST – SSE)/SST = R² (the proportion of variance in the DV accounted for, i.e. predicted, by the set of predictors)
Issues with Regression
• The degree of predictability is underestimated if the underlying relationship is curvilinear, the sample is restricted in range, or the measures are unreliable
• Prediction models tell us nothing about causality or the underlying direction of causality
• The relative importance of the predictor variables in predicting the DV is judged by b (raw scores) or β (Z scores), the beta-weights
• because these show the unique contribution of a predictor to the DV after considering all the other predictors
Issues with Regression contd.
• But high correlations between predictors (multicollinearity) make separating out their relative effects more difficult
• SPSS provides means for checking this, and there are different ways in which the contribution of each predictor variable can be assessed
• In the simultaneous method (SPSS calls this the Enter method), we specify the full set of predictor variables that make up the model
Issues with Regression contd. 1
• In contrast, hierarchical methods enter variables into the model in a specified order reflecting theory or previous findings, e.g. if one variable is likely to be more important than another
• In statistical methods, the order in which predictors are entered is determined by the strength of their correlation with the DV:
• In forward selection, SPSS enters variables one at a time according to strength of correlation; variables not contributing are excluded.
Issues with Regression contd. 2
• In backwards selection, SPSS enters all variables, removes the weakest, then recalculates the regression; this is repeated until only useful predictors remain
• Stepwise is the most sophisticated: each variable is entered and its value assessed. If adding the variable contributes to the model it is retained, but all the others are then retested to see if they still contribute to the success of the model. If not, they are removed. This results in the smallest possible set of predictors.
• The Remove method removes variables in a block
• With no model in mind, it is safest to use the Enter method.
Summary
• Examples of simple/bivariate linear regression
• Predictions using Z scores
• Predictions using raw scores
Reading
• Linear Regression (prediction): Aron, A., & Aron, E. N. (1994). Statistics for Psychology. Prentice Hall International Editions. Ch. 4. [519 .502 415 ARO]
• Other books on regression in the library
For Tutorial
• Ideas for the class project
• Predict what behaviour from what measure
• Remember:
  • the criterion variable (DV), the variable you want to predict, should be on a continuous scale
  • the predictor variable (IV) should be on a continuous scale (or a dichotomous nominal variable, e.g. M vs F)