
Multiple Regression Analysis: Part 1


Presentation Transcript


  1. Multiple Regression Analysis: Part 1 Correlation, Simple Regression, Introduction to Multiple Regression and Matrix Algebra

  2. Background: 3 Aims of Research. Regression Defined.

  3. Numerical Example • N = 25 CDs • X = Marketing $ • Y = Sales Index • Question: Can we predict sales by knowing marketing expenditures?

  4. Correlation The relationship between x and y can be summarized as r = cov(x, y) / (sx · sy) Or, equivalently, in deviation-score form: r = SPxy / √(SSx · SSy)
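
A minimal sketch of both formulas in NumPy. The deck's actual 25-case data are not in the transcript, so the five illustrative rows below are borrowed from the small matrix example later in the deck (two columns whose means are 9.2):

```python
import numpy as np

x = np.array([12.0, 10.0, 8.0, 8.0, 8.0])   # illustrative, not the deck's data
y = np.array([12.0, 12.0, 4.0, 8.0, 10.0])

# Route 1: r = cov(x, y) / (s_x * s_y)
r1 = np.cov(x, y, ddof=1)[0, 1] / (np.std(x, ddof=1) * np.std(y, ddof=1))

# Route 2: r = SPxy / sqrt(SSx * SSy), from deviation scores
dx, dy = x - x.mean(), y - y.mean()
r2 = np.sum(dx * dy) / np.sqrt(np.sum(dx**2) * np.sum(dy**2))

print(r1, r2)  # identical up to floating-point error (about .70 here)
```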

  5. Or visually… (scatterplot of Marketing $ against Sales Index)

  6. Given the relationship, we can predict y by developing the simple regression equation • Predicted Score: y' = a + bx • a = the intercept (the value of y' when x = 0) • b = the slope, or regression weight • x = the predictor score • e = the error, or residual (y – y') • Actual Score: y = a + bx + e

  7. Calculating parameter estimates If you have the correlation and standard deviations: b = r (sy / sx) If you do not: b = SPxy / SSx Once you have b, a is easy: a = ȳ – b·x̄
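
A sketch of both routes to b, and then a, on the same illustrative five-case data (again, not the deck's real 25 CDs):

```python
import numpy as np

x = np.array([12.0, 10.0, 8.0, 8.0, 8.0])
y = np.array([12.0, 12.0, 4.0, 8.0, 10.0])

# If you have the correlation and standard deviations: b = r * (sy / sx)
r = np.corrcoef(x, y)[0, 1]
b_from_r = r * np.std(y, ddof=1) / np.std(x, ddof=1)

# If you do not: b = SPxy / SSx
dx, dy = x - x.mean(), y - y.mean()
b_from_ss = np.sum(dx * dy) / np.sum(dx**2)

# Once you have b, a is easy: a = ybar - b * xbar
a = y.mean() - b_from_ss * x.mean()
print(b_from_r, b_from_ss, a)  # the two b's agree
```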

  8. Numerical Example with more stuff

  9. Partitioning Variance – What else? • Total Variation = SSY or SSTOT • What we cannot account for… • Actual y-scores minus predicted y-scores • y – y' • Squared and summed, these give SSRES • What we can account for… • SSTOT – SSRES (a.k.a. SSREG) • Or… • Predicted y-scores minus the mean of y (squared & summed) • Why?

  10. Calculating F, because we can F = MSREG / MSRES = (SSREG / k) / (SSRES / (N – k – 1))
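
A sketch of the partition and the F-ratio on the illustrative five-case data; the deck's real example has N = 25 and yields F = 8.301:

```python
import numpy as np

x = np.array([12.0, 10.0, 8.0, 8.0, 8.0])
y = np.array([12.0, 12.0, 4.0, 8.0, 10.0])

b = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean())**2)
a = y.mean() - b * x.mean()
y_hat = a + b * x                        # predicted scores y'

ss_tot = np.sum((y - y.mean())**2)       # total variation, SSTOT
ss_res = np.sum((y - y_hat)**2)          # what we cannot account for, SSRES
ss_reg = np.sum((y_hat - y.mean())**2)   # what we can account for, SSREG
assert np.isclose(ss_tot, ss_reg + ss_res)

k, N = 1, len(y)
F = (ss_reg / k) / (ss_res / (N - k - 1))
print(ss_tot, ss_reg, ss_res, F)
```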

  11. Effect Size / Fit… Take our previously calculated F, 8.301. We can evaluate it against the F distribution with (k, N – k – 1) degrees of freedom. The null hypothesis of this test is ___________________________________.
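
To evaluate an F at (k, N – k – 1) df programmatically, a sketch using scipy.stats; the R2-from-F line is the standard algebraic identity, not something shown in the deck:

```python
from scipy import stats

F, k, N = 8.301, 1, 25            # values from the deck
df1, df2 = k, N - k - 1           # (1, 23)

p = stats.f.sf(F, df1, df2)       # right-tail p-value for the observed F
R2 = F * df1 / (F * df1 + df2)    # standard identity linking F and R-squared
print(p, R2)                      # p < .01, R-squared about .27
```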

  12. Multiple Regression • Multiple Independent (predictor) variables • One Dependent (criterion) variable • Predicted Score • y' = a + b1x1 + b2x2 + … + bkxk • Actual Score • yi = a + b1xi1 + b2xi2 + … + bkxik + ei

  13. Numerical Example • N = 25 Participants (CDs) • X1: Marketing Expenditures • X2: Airplay/Day • Y: Sales Index • Question: Can the two pieces of information, Marketing Expenditures and Airplay, be used in combination to predict CD Sales?

  14. Selected SPSS Output (1)

  15. Selected SPSS Output (2) Notice the change in b for Marketing!
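
The SPSS tables themselves did not survive the transcript, but a NumPy sketch with simulated (entirely made-up) data shows the same phenomenon: when a correlated second predictor enters the model, the b-weight for the first predictor changes:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 25
x1 = rng.normal(50, 10, N)                  # "Marketing expenditures"
x2 = 0.5 * x1 + rng.normal(0, 5, N)         # "Airplay/day", correlated with x1
y = 2.0 + 0.3 * x1 + 0.8 * x2 + rng.normal(0, 3, N)  # "Sales index"

X_simple = np.column_stack([np.ones(N), x1])
X_full = np.column_stack([np.ones(N), x1, x2])

b_simple, *_ = np.linalg.lstsq(X_simple, y, rcond=None)
b_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)

# Because x1 and x2 are correlated, x1's coefficient changes (here it drops)
# once x2 enters the model: the "change in b for Marketing."
print(b_simple[1], b_full[1])
```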

  16. The equations introduced previously can be extended to the two-IV case • Involves finding six SS terms • SSX1, SSX2, SSX1&X2, SSY, SSX1&Y, SSX2&Y • Must also calculate • Two b-weights • Two beta weights • The correlation between X1 and X2 • Then SS for Regression, Residual and Total • Then significance tests for each b-weight In general, it is a pain in the backside.

  17. For instance, to obtain b1 & b2… b1 = (SSX2 · SSX1&Y – SSX1&X2 · SSX2&Y) / (SSX1 · SSX2 – (SSX1&X2)²) b2 = (SSX1 · SSX2&Y – SSX1&X2 · SSX1&Y) / (SSX1 · SSX2 – (SSX1&X2)²) Note: the slide's worked numbers were from a different example…; mileage may vary for the current example.
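
In code, those formulas look like this (illustrative arrays; the variable names are mine, not the deck's):

```python
import numpy as np

x1 = np.array([12.0, 10.0, 8.0, 8.0, 8.0])  # made-up two-predictor data
x2 = np.array([12.0, 12.0, 4.0, 8.0, 10.0])
y = np.array([14.0, 13.0, 6.0, 9.0, 10.0])

d1, d2, dy = x1 - x1.mean(), x2 - x2.mean(), y - y.mean()
ss_x1, ss_x2 = np.sum(d1**2), np.sum(d2**2)
sp_12, sp_1y, sp_2y = np.sum(d1 * d2), np.sum(d1 * dy), np.sum(d2 * dy)

denom = ss_x1 * ss_x2 - sp_12**2
b1 = (ss_x2 * sp_1y - sp_12 * sp_2y) / denom
b2 = (ss_x1 * sp_2y - sp_12 * sp_1y) / denom
a = y.mean() - b1 * x1.mean() - b2 * x2.mean()
print(b1, b2, a)
```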

  18. Which is why matrix algebra is our friend • There's only one equation to get the Standardized Regression Weights • Bi = Rij^-1 Riy • Then another one to get R2 • R2 = Ryi Bi • And so on. So, let's take a joyride through the wonderful world of Matrix Algebra
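
In NumPy those two equations really are one line each; the correlations below are hypothetical:

```python
import numpy as np

R_xx = np.array([[1.0, 0.5],        # Rij: intercorrelations among the IVs
                 [0.5, 1.0]])
R_xy = np.array([0.6, 0.7])         # Riy: each IV's correlation with y

beta = np.linalg.inv(R_xx) @ R_xy   # Bi = Rij^-1 Riy
R2 = R_xy @ beta                    # R2 = Ryi Bi
print(beta, R2)
```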

  19. First, some definitions • For us, matrix algebra is a set of operations that can be carried out on a group of numbers (a matrix) as a whole. • A matrix is denoted by a bold capital letter • Has R rows and C columns (thus has dimension RxC) • R and/or C can be 1. • When R = 1, the matrix is a row vector. • When C = 1, it is a column vector. • When both R and C are 1, it is a scalar (usually denoted by a lower-case bold letter). • xij – an element of the matrix X, where i represents the row and j the column. Thus, x31 refers to the element in the third row and first column.

  20. Example • The order of X is 5x2 • x31 = 3 (the element in the third row, first column)

  21. Some other important concepts • A is a diagonal matrix (non-zero elements only on the main diagonal) • I is an Identity Matrix (a diagonal matrix whose diagonal elements are all 1)

  22. Matrix Transpose • X is our 5x2 matrix previously introduced. • X' is the transpose of X: the 2x5 matrix whose rows are the columns of X.

  23. Matrix Addition Given two matrices, X and Y, we can add the individual elements of X and Y to get T = X + Y

  24. Similarly, Matrix Subtraction… Given the same two matrices, X and Y, we can subtract the individual elements of Y from those of X to get D = X – Y

  25. We can also use scalars with matrices Here, I've subtracted a scalar, 9.2, from T. I could have also multiplied T by the scalar 0.5 to get a matrix of means (the element-wise means of X and Y, since T = X + Y). The value 9.2 happens to be the mean of each column of T, meaning we have centered the data within each column.
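
A NumPy sketch of slides 23-25, assuming T is the 5x2 matrix whose columns both average 9.2 (its values are recovered by adding 9.2 back to the centered matrix shown on slide 27):

```python
import numpy as np

T = np.array([[12.0, 12.0],     # T = X + Y from the addition slide;
              [10.0, 12.0],     # both columns happen to average 9.2
              [ 8.0,  4.0],
              [ 8.0,  8.0],
              [ 8.0, 10.0]])

means = 0.5 * T                 # scalar multiplication: element-wise means of X and Y
C = T - 9.2                     # scalar subtraction: centers each column of T
print(T.mean(axis=0))           # [9.2, 9.2]
print(C)                        # the centered matrix used on the next slides
```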

  26. Matrix Multiplication: As seen on T.V.! • Matrices must be conformable for multiplication • The first matrix must have the same number of columns as the second matrix has rows. • The resulting matrix will be of order R1 x C2 (rows of the first by columns of the second) • We then multiply away… • We multiply each element of the first row of the first matrix by the corresponding element of the first column of the second matrix, and sum. • Then we multiply each element of the first row of the first matrix by the corresponding element of the second column of the second matrix, and sum. • We continue until we run out of columns in the second matrix, then do it over again for the second row of the first matrix.

  27. Example If we take the transpose of C (C') and post-multiply it by C, we get a new matrix called SSCP (sums of squares and cross-products). It goes like this: SSCP11 = (2.8*2.8)+(0.8*0.8)+(-1.2*-1.2)+(-1.2*-1.2)+(-1.2*-1.2) = 12.8 SSCP12 = (2.8*2.8)+(0.8*2.8)+(-1.2*-5.2)+(-1.2*-1.2)+(-1.2*0.8) = 16.8 SSCP21 = (2.8*2.8)+(2.8*0.8)+(-5.2*-1.2)+(-1.2*-1.2)+(0.8*-1.2) = 16.8 SSCP22 = (2.8*2.8)+(2.8*2.8)+(-5.2*-5.2)+(-1.2*-1.2)+(0.8*0.8) = 44.8
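
The same computation in NumPy, where C'C is a single expression:

```python
import numpy as np

C = np.array([[ 2.8,  2.8],    # the centered matrix from the previous slides
              [ 0.8,  2.8],
              [-1.2, -5.2],
              [-1.2, -1.2],
              [-1.2,  0.8]])

SSCP = C.T @ C                 # post-multiply the transpose of C by C
print(SSCP)                    # [[12.8, 16.8], [16.8, 44.8]]
```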

  28. SSCP, V-C & R Rearranging the elements into a matrix:
SSCP = [ 12.8 16.8 ]
       [ 16.8 44.8 ]
Multiplying by a scalar, 1/(n – 1) = 1/4, gives the variance-covariance matrix:
V-C = [ 3.2  4.2 ]
      [ 4.2 11.2 ]
The above matrix is closely related to the familiar correlation matrix R: dividing each covariance by the product of the corresponding standard deviations gives r12 = 4.2 / √(3.2 · 11.2) ≈ .70.
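
A sketch of the SSCP → V-C → R chain in NumPy:

```python
import numpy as np

SSCP = np.array([[12.8, 16.8],
                 [16.8, 44.8]])
n = 5

VC = SSCP / (n - 1)            # variance-covariance matrix: [[3.2, 4.2], [4.2, 11.2]]
s = np.sqrt(np.diag(VC))       # the two standard deviations
R = VC / np.outer(s, s)        # divide each element by s_i * s_j
print(VC)
print(R)                       # 1s on the diagonal, r of about .70 off it
```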

  29. Matrix Division: It just keeps getting better! • Matrix division is even stranger than matrix multiplication. • You know most of what you need to know already, though, since division is accomplished by multiplying by an inverted matrix. • Finding the inverse is the tricky part. • We will do a very simple example.

  30. Inverses • Not all matrices have an inverse. • A matrix inverse is defined such that • XX^-1 = I • We need two things in order to find the inverse • 1. The determinant of the matrix we wish to invert, V-C in this case, which is written as |V-C| • 2. The adjoint of the same matrix, written adj(V-C)

  31. Determinant and Adjoint For a 2x2 matrix, V, the determinant is V11*V22 – V12*V21, so |V-C| = 3.2*11.2 – 4.2*4.2 = 18.2 The adjoint is formed by swapping the two diagonal elements and reversing the signs of the two off-diagonal elements:
adj(V-C) = [ 11.2 -4.2 ]
           [ -4.2  3.2 ]

  32. Almost there… We then divide each element of the adjoint matrix by the determinant:
V-C^-1 = (1/18.2) [ 11.2 -4.2 ]
                  [ -4.2  3.2 ]
Or,
V-C^-1 = [  0.615 -0.231 ]
         [ -0.231  0.176 ]

  33. Checking our work… V-C * V-C^-1 = I (V-C * V-C^-1)11 = 3.2*0.615 + 4.2*-0.231 = 1.968 - 0.9702 ≈ 1.0 (V-C * V-C^-1)12 = 3.2*-0.231 + 4.2*0.176 = -0.7392 + 0.7392 = 0 (V-C * V-C^-1)21 = 4.2*0.615 + 11.2*-0.231 = 2.583 - 2.5872 ≈ 0 (V-C * V-C^-1)22 = 4.2*-0.231 + 11.2*0.176 = -0.9702 + 1.9712 ≈ 1.0 The small departures from exactly 0 and 1 are rounding error from the three-decimal inverse.
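
The whole determinant/adjoint/check sequence of slides 30-33 as a NumPy sketch:

```python
import numpy as np

VC = np.array([[3.2,  4.2],
               [4.2, 11.2]])

det = VC[0, 0] * VC[1, 1] - VC[0, 1] * VC[1, 0]   # |V-C| = 18.2
adj = np.array([[ VC[1, 1], -VC[0, 1]],           # swap the diagonal,
                [-VC[1, 0],  VC[0, 0]]])          # negate the off-diagonal
VC_inv = adj / det                                # divide the adjoint by the determinant

print(np.allclose(VC_inv, np.linalg.inv(VC)))     # True: matches the built-in inverse
print(VC @ VC_inv)                                # approximately the identity matrix I
```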

  34. Why we leave matrix operations to computers Finding the determinant of a 3 x 3 matrix: D = a(ei – fh) + b(fg – di) + c(dh – eg) Inverting the 3 x 3 matrix after solving for the determinant requires forming the full 3 x 3 matrix of cofactors, transposing it, and dividing every element by D.
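
And a sketch of why: the hand expansion versus the one-line NumPy calls, on an arbitrary 3x3 matrix (not from the deck):

```python
import numpy as np

M = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
(a, b, c), (d, e, f), (g, h, i) = M   # unpack the nine elements by row

# The slide's cofactor expansion, written out by hand...
D = a * (e * i - f * h) + b * (f * g - d * i) + c * (d * h - e * g)

# ...versus letting the computer do it, determinant and inverse alike
print(D, np.linalg.det(M))            # same value (8.0)
print(np.linalg.inv(M))               # the full 3x3 inverse in one call
```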

  35. So, why did I drag you through this?
