
Linear regression models in matrix terms



Presentation Transcript


  1. Linear regression models in matrix terms

  2. The regression function in matrix terms

  3. Simple linear regression function (for i = 1, …, n)
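  The displayed model is not in the transcript; in standard notation, the simple linear regression function referred to here is

    y_i = \beta_0 + \beta_1 x_i + \varepsilon_i, \qquad i = 1, \dots, n.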

  4. Simple linear regression function in matrix notation
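  In standard matrix notation, the same model is written

    \mathbf{Y} = \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon},

  with

    \mathbf{Y} = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}, \quad
    \mathbf{X} = \begin{pmatrix} 1 & x_1 \\ \vdots & \vdots \\ 1 & x_n \end{pmatrix}, \quad
    \boldsymbol{\beta} = \begin{pmatrix} \beta_0 \\ \beta_1 \end{pmatrix}, \quad
    \boldsymbol{\varepsilon} = \begin{pmatrix} \varepsilon_1 \\ \vdots \\ \varepsilon_n \end{pmatrix}.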

  5. Definition of a matrix An r×c matrix is a rectangular array of symbols or numbers arranged in r rows and c columns. A matrix is almost always denoted by a single capital letter in boldface type.

  6. Definition of a vector and a scalar A column vector is an r×1 matrix, that is, a matrix with only one column. A row vector is a 1×c matrix, that is, a matrix with only one row. A 1×1 “matrix” is called a scalar, but it’s just an ordinary number, such as 29 or σ².

  7. Matrix multiplication • The Xβ in the regression function is an example of matrix multiplication. • Two matrices can be multiplied together only if: • the # of columns of the first matrix equals the # of rows of the second matrix. • Then: • # of rows of the resulting matrix equals # of rows of first matrix. • # of columns of the resulting matrix equals # of columns of second matrix.

  8. Matrix multiplication • If A is a 2×3 matrix and B is a 3×5 matrix, then the matrix multiplication AB is possible. The resulting matrix C = AB has 2 rows and 5 columns. • Is the matrix multiplication BA possible? No, because B has 5 columns but A has only 2 rows. • If X is an n×p matrix and β is a p×1 column vector, then Xβ is an n×1 column vector.

  9. Matrix multiplication The entry in the ith row and jth column of C is the inner product (element-by-element products added together) of the ith row of A with the jth column of B.
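  In symbols, if A is r×s and B is s×c, the (i, j) entry of C = AB is

    c_{ij} = \sum_{k=1}^{s} a_{ik}\, b_{kj}.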

  10. The Xβ multiplication in simple linear regression setting
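  Carrying out this multiplication for the simple linear regression design matrix (the slide's display is not in the transcript) gives

    \mathbf{X}\boldsymbol{\beta} =
    \begin{pmatrix} 1 & x_1 \\ \vdots & \vdots \\ 1 & x_n \end{pmatrix}
    \begin{pmatrix} \beta_0 \\ \beta_1 \end{pmatrix} =
    \begin{pmatrix} \beta_0 + \beta_1 x_1 \\ \vdots \\ \beta_0 + \beta_1 x_n \end{pmatrix}.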

  11. Matrix addition • The Xβ+ε in the regression function is an example of matrix addition. • Simply add the corresponding elements of the two matrices. • For example, add the entry in the first row, first column of the first matrix with the entry in the first row, first column of the second matrix, and so on. • Two matrices can be added together only if they have the same number of rows and columns.

  12. Matrix addition For example:
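  The slide's numeric example is not in the transcript; an illustrative one (numbers mine):

    \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} +
    \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix} =
    \begin{pmatrix} 6 & 8 \\ 10 & 12 \end{pmatrix}.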

  13. The Xβ+ε addition in the simple linear regression setting
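  Adding the error vector to Xβ element by element recovers the full model:

    \mathbf{X}\boldsymbol{\beta} + \boldsymbol{\varepsilon} =
    \begin{pmatrix} \beta_0 + \beta_1 x_1 + \varepsilon_1 \\ \vdots \\ \beta_0 + \beta_1 x_n + \varepsilon_n \end{pmatrix} =
    \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix} = \mathbf{Y}.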

  14. Multiple linear regression function in matrix notation
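  The displayed matrices are not in the transcript; assuming the usual indexing with p − 1 predictors, the model Y = Xβ + ε uses an n×p design matrix and a p×1 coefficient vector:

    \mathbf{X} =
    \begin{pmatrix}
      1 & x_{11} & \cdots & x_{1,p-1} \\
      \vdots & \vdots & & \vdots \\
      1 & x_{n1} & \cdots & x_{n,p-1}
    \end{pmatrix}, \qquad
    \boldsymbol{\beta} = \begin{pmatrix} \beta_0 \\ \beta_1 \\ \vdots \\ \beta_{p-1} \end{pmatrix}.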

  15. Least squares estimates of the parameters

  16. Least squares estimates The p×1 vector b containing the estimates of the p parameters can be shown to equal b = (X'X)⁻¹X'Y, where (X'X)⁻¹ is the inverse of the X'X matrix and X' is the transpose of the X matrix.
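  In display form:

    \mathbf{b} = \begin{pmatrix} b_0 \\ b_1 \\ \vdots \\ b_{p-1} \end{pmatrix}
    = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\mathbf{Y}.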

  17. Definition of the transpose of a matrix The transpose of a matrix A is a matrix, denoted A' or AT, whose rows are the columns of A and whose columns are the rows of A … all in the same original order.
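  A small illustration (numbers mine):

    \mathbf{A} = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}, \qquad
    \mathbf{A}' = \begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}.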

  18. The X'X matrix in the simple linear regression setting
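  For the simple linear regression design matrix (a column of 1's and a column of x values), X'X works out to

    \mathbf{X}'\mathbf{X} =
    \begin{pmatrix} n & \sum x_i \\ \sum x_i & \sum x_i^2 \end{pmatrix}.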

  19. Definition of the identity matrix The (square) n×n identity matrix, denoted Iₙ, is a matrix with 1’s on the diagonal and 0’s elsewhere. The identity matrix plays the same role as the number 1 in ordinary arithmetic.

  20. Definition of the inverse of a matrix The inverse A⁻¹ of a square (!!) matrix A is the unique matrix such that A A⁻¹ = A⁻¹ A = I.

  21. Least squares estimates in simple linear regression setting. Find X'X.

      soap   suds    so*su    soap2
       4.0     33    132.0    16.00
       4.5     42    189.0    20.25
       5.0     45    225.0    25.00
       5.5     51    280.5    30.25
       6.0     53    318.0    36.00
       6.5     61    396.5    42.25
       7.0     62    434.0    49.00
      ----    ---   ------   ------
      38.5    347   1975.0   218.75
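  From the column totals above (n = 7, Σ soap = 38.5, Σ soap² = 218.75):

    \mathbf{X}'\mathbf{X} = \begin{pmatrix} 7 & 38.5 \\ 38.5 & 218.75 \end{pmatrix}.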

  22. Least squares estimates in simple linear regression setting. Find inverse of X'X. It’s very messy to determine inverses by hand. We let computers find inverses for us. Therefore:
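  Since the slide leaves the actual inversion to software, here is a minimal NumPy sketch (not part of the original slides) that reproduces the computation for the soap/suds data from slide 21:

    import numpy as np

    # Soap/suds data from slide 21
    soap = np.array([4.0, 4.5, 5.0, 5.5, 6.0, 6.5, 7.0])
    suds = np.array([33, 42, 45, 51, 53, 61, 62], dtype=float)

    # Design matrix: a column of 1's and the soap column
    X = np.column_stack([np.ones_like(soap), soap])
    Y = suds.reshape(-1, 1)

    XtX = X.T @ X                  # [[7, 38.5], [38.5, 218.75]]
    XtX_inv = np.linalg.inv(XtX)   # let the computer find the inverse
    b = XtX_inv @ (X.T @ Y)        # least squares estimates, approx. [[-2.68], [9.50]]

    print(XtX_inv)
    print(b)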

  23. Least squares estimates in simple linear regression setting. Find X'Y.

      soap   suds    so*su    soap2
       4.0     33    132.0    16.00
       4.5     42    189.0    20.25
       5.0     45    225.0    25.00
       5.5     51    280.5    30.25
       6.0     53    318.0    36.00
       6.5     61    396.5    42.25
       7.0     62    434.0    49.00
      ----    ---   ------   ------
      38.5    347   1975.0   218.75
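  From the column totals above (Σ suds = 347, Σ soap·suds = 1975.0):

    \mathbf{X}'\mathbf{Y} = \begin{pmatrix} \sum y_i \\ \sum x_i y_i \end{pmatrix}
    = \begin{pmatrix} 347 \\ 1975.0 \end{pmatrix}.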

  24. Least squares estimates in simple linear regression setting. The regression equation is suds = - 2.68 + 9.50 soap.

  25. Linear dependence The columns of the matrix shown on this slide are linearly dependent, since (at least) one of the columns can be written as a linear combination of the others. If none of the columns can be written as a linear combination of the others, then we say the columns are linearly independent.
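  The matrix displayed on the slide is not in the transcript; an illustrative example (numbers mine) of linearly dependent columns is

    \begin{pmatrix} 1 & 2 & 5 \\ 3 & 6 & 1 \\ 2 & 4 & 7 \end{pmatrix},

  where the second column is 2 times the first column.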

  26. Linear dependence is not always obvious Formally, the columns a1, a2, …, an of an n×n matrix are linearly dependent if there are constants c1, c2, …, cn, not all 0, such that c1a1 + c2a2 + ⋯ + cnan = 0 (the zero vector).

  27. Implications of linear dependence on regression • The inverse of a square matrix exists only if the columns are linearly independent. • Since the regression estimate b depends on (X'X)⁻¹, the parameter estimates b0, b1, …, cannot be (uniquely) determined if some of the columns of X are linearly dependent.

  28. The main point about linear dependence • If the columns of the X matrix (that is, if two or more of your predictor variables) are linearly dependent (or nearly so), you will run into trouble when trying to estimate the regression function.

  29. Implications of linear dependence on regression

      soap1   soap2   suds
       4.0      8      33
       4.5      9      42
       5.0     10      45
       5.5     11      51
       6.0     12      53
       6.5     13      61
       7.0     14      62

  Here soap2 is exactly 2 × soap1, so the columns of the X matrix are linearly dependent, and the software reports:

  * soap2 is highly correlated with other X variables
  * soap2 has been removed from the equation

  The regression equation is suds = - 2.68 + 9.50 soap1

  30. Fitted values and residuals

  31. Fitted values
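  The slide's display is not in the transcript; in standard notation, with b the vector of least squares estimates, the fitted values are the elements of

    \hat{\mathbf{Y}} = \mathbf{X}\mathbf{b}, \qquad
    \hat{y}_i = b_0 + b_1 x_i \ \text{(simple linear regression case)}.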

  32. Fitted values The vector of fitted values is sometimes represented as a function of the hat matrix H. That is:
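  The standard hat-matrix form (the displayed formula is not in the transcript) is

    \hat{\mathbf{Y}} = \mathbf{H}\mathbf{Y}, \qquad
    \text{where } \mathbf{H} = \mathbf{X}(\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'.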

  33. The residual vector (for i = 1, …, n)
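  In standard notation, the residuals are

    e_i = y_i - \hat{y}_i, \qquad \mathbf{e} = \mathbf{Y} - \hat{\mathbf{Y}}.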

  34. The residual vector written as a function of the hat matrix
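  Substituting \hat{\mathbf{Y}} = \mathbf{H}\mathbf{Y} gives the standard form

    \mathbf{e} = \mathbf{Y} - \mathbf{H}\mathbf{Y} = (\mathbf{I} - \mathbf{H})\,\mathbf{Y}.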

  35. Sum of squares and the analysis of variance table

  36. Analysis of variance table in matrix terms

  37. Sum of squares In general, if you pre-multiply a vector by its transpose, you get a sum of squares.
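  For instance, for an n×1 vector a, the inner-product rule from slide 9 gives

    \mathbf{a}'\mathbf{a} =
    \begin{pmatrix} a_1 & a_2 & \cdots & a_n \end{pmatrix}
    \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}
    = \sum_{i=1}^{n} a_i^2.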

  38. Error sum of squares

  39. Error sum of squares
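  The slide's formulas are not in the transcript; the standard matrix form of the error sum of squares, consistent with the residual vector above, is

    \text{SSE} = \sum_{i=1}^{n}(y_i - \hat{y}_i)^2
    = \mathbf{e}'\mathbf{e}
    = (\mathbf{Y} - \mathbf{X}\mathbf{b})'(\mathbf{Y} - \mathbf{X}\mathbf{b}).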

  40. Total sum of squares Previously, we’d write the total sum of squares by summing the squared deviations of the observations from their mean. But, it can be shown that, equivalently, it can be written in matrix terms using J, a (square) n×n matrix containing all 1’s.
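  The two displayed formulas are not in the transcript; the standard forms they describe are

    \text{SSTO} = \sum_{i=1}^{n} (y_i - \bar{y})^2
    \qquad\text{and}\qquad
    \text{SSTO} = \mathbf{Y}'\mathbf{Y} - \frac{1}{n}\,\mathbf{Y}'\mathbf{J}\mathbf{Y}.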

  41. An example of total sum of squares If n = 2, the total sum of squares can be computed directly from the two observations. But, note that we get the same answer by the matrix formula.
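  A worked version with illustrative numbers (not necessarily the slide's): take Y = (1, 3)', so n = 2 and \bar{y} = 2. Directly,

    \text{SSTO} = (1-2)^2 + (3-2)^2 = 2,

  and using the matrix form with J the 2×2 matrix of 1's,

    \mathbf{Y}'\mathbf{Y} - \tfrac{1}{2}\,\mathbf{Y}'\mathbf{J}\mathbf{Y}
    = (1 + 9) - \tfrac{1}{2}(1 + 3)^2 = 10 - 8 = 2,

  the same answer.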

  42. Analysis of variance table in matrix terms
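  The table itself is not in the transcript; the standard matrix-form decomposition (using b, J, and Y as defined above) is

    \text{SSR} = \mathbf{b}'\mathbf{X}'\mathbf{Y} - \tfrac{1}{n}\,\mathbf{Y}'\mathbf{J}\mathbf{Y}, \quad df = p - 1
    \text{SSE} = \mathbf{Y}'\mathbf{Y} - \mathbf{b}'\mathbf{X}'\mathbf{Y}, \quad df = n - p
    \text{SSTO} = \mathbf{Y}'\mathbf{Y} - \tfrac{1}{n}\,\mathbf{Y}'\mathbf{J}\mathbf{Y}, \quad df = n - 1.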

  43. Model assumptions

  44. Error term assumptions • As always, the error terms εi are: • independent • normally distributed (with mean 0) • with equal variances σ2 • Now, how can we say the same thing using matrices and vectors?

  45. Error terms as a random vector The n×1 random error term vector, denoted as ε, is:
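  The displayed vector is not in the transcript; it is simply the error terms stacked into a column:

    \boldsymbol{\varepsilon} = \begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \\ \vdots \\ \varepsilon_n \end{pmatrix}.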

  46. The mean (expectation) of the random error term vector The n×1 mean error term vector, denoted as E(ε), is:
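  The displayed definition and assumption are not in the transcript; by definition E(ε) stacks the expectations E(εi), and under the mean-zero assumption from slide 44,

    E(\boldsymbol{\varepsilon}) =
    \begin{pmatrix} E(\varepsilon_1) \\ \vdots \\ E(\varepsilon_n) \end{pmatrix} =
    \begin{pmatrix} 0 \\ \vdots \\ 0 \end{pmatrix} = \mathbf{0}.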

  47. The variance of the random error term vector The n×n variance matrix, denoted as σ²(ε), is defined as follows: diagonal elements are variances of the errors; off-diagonal elements are covariances between errors.
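  Written out as described (the displayed matrix is not in the transcript), with variances on the diagonal and covariances off the diagonal:

    \sigma^2(\boldsymbol{\varepsilon}) =
    \begin{pmatrix}
      \sigma^2(\varepsilon_1) & \sigma(\varepsilon_1,\varepsilon_2) & \cdots & \sigma(\varepsilon_1,\varepsilon_n) \\
      \sigma(\varepsilon_2,\varepsilon_1) & \sigma^2(\varepsilon_2) & \cdots & \sigma(\varepsilon_2,\varepsilon_n) \\
      \vdots & \vdots & \ddots & \vdots \\
      \sigma(\varepsilon_n,\varepsilon_1) & \sigma(\varepsilon_n,\varepsilon_2) & \cdots & \sigma^2(\varepsilon_n)
    \end{pmatrix}.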

  48. The ASSUMED variance of the random error term vector BUT, we assume error terms are independent (covariances are 0), and have equal variances (σ²).
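  With those assumptions plugged in, the variance matrix reduces to

    \sigma^2(\boldsymbol{\varepsilon}) =
    \begin{pmatrix}
      \sigma^2 & 0 & \cdots & 0 \\
      0 & \sigma^2 & \cdots & 0 \\
      \vdots & \vdots & \ddots & \vdots \\
      0 & 0 & \cdots & \sigma^2
    \end{pmatrix}
    = \sigma^2 \mathbf{I}_n.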

  49. Scalar by matrix multiplication Just multiply each element of the matrix by the scalar. For example:
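  The slide's example is not in the transcript; an illustrative one (numbers mine):

    2 \begin{pmatrix} 1 & 3 \\ 4 & 6 \end{pmatrix} =
    \begin{pmatrix} 2 & 6 \\ 8 & 12 \end{pmatrix}.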

  50. The ASSUMED variance of the random error term vector
