General Linear Model

Presentation Transcript


  1. General Linear Model with correlated error terms: y = Xβ + ε, E(ε) = 0, Var(ε) = Σ = σ²V ≠ σ²I

  2. The General Linear Model, Σ ≠ σ²I. With V known, the generalized least squares (GLS) estimator is β̂ = (X′V⁻¹X)⁻¹X′V⁻¹y, with Var(β̂) = σ²(X′V⁻¹X)⁻¹.

  3. Summary

  4. Example: Simple Linear Model where the variance is proportional to X².

  5. Testing and Confidence Intervals. The model yi = β0 + β1xi + εi, Var(εi) = σ²xi² (1), can be converted to the model yi/xi = β0(1/xi) + β1 + εi/xi, Var(εi/xi) = σ² (2).

  6. Thus simultaneous confidence intervals can be constructed using model (2), to which standard least squares theory applies.
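
A minimal NumPy sketch of this transformation, using made-up data (the sample size, coefficients, and noise level are illustrative assumptions, not from the slides): dividing model (1) through by xi yields the constant-variance model (2), after which ordinary least squares applies.

```python
import numpy as np

# Hypothetical data for y_i = b0 + b1*x_i + e_i with Var(e_i) = sigma^2 * x_i^2
rng = np.random.default_rng(0)
x = rng.uniform(1.0, 10.0, size=100)
y = 2.0 + 0.5 * x + x * rng.normal(scale=0.3, size=100)

# Model (2): y_i/x_i = b0*(1/x_i) + b1 + e_i/x_i, with constant variance sigma^2
ys = y / x
Xs = np.column_stack([1.0 / x, np.ones_like(x)])  # columns correspond to (b0, b1)

# OLS on the transformed model is weighted least squares on the original model
coef, *_ = np.linalg.lstsq(Xs, ys, rcond=None)
b0_hat, b1_hat = coef
print(b0_hat, b1_hat)  # estimates of the intercept and slope
```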

  7. Example: Simple Linear Model with no intercept. The model: yi = βxi + εi, i = 1, …, n.

  8. Thus

  9. Also: special cases.

  10. General Linear Model. Case 2: Σ unknown

  11. The General Linear Model, Σ unknown

  12. The General Linear Model, Σ unknown. Call β̂ = (X′X)⁻¹X′y the Ordinary Least Squares (OLS) estimator of β. Note: E(β̂) = (X′X)⁻¹X′E(y) = (X′X)⁻¹X′Xβ = β. Thus the Ordinary Least Squares (OLS) estimator of β is always unbiased, whatever Σ may be.

  13. The estimator β̃ = (X′Σ⁻¹X)⁻¹X′Σ⁻¹y is the Optimal (UMVU) estimator of β. Note: this is also an unbiased estimator of β. However, the Optimal (UMVU) estimator of β requires knowledge of Σ in order to calculate it.
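
A short NumPy sketch contrasting the two estimators; the design, coefficients, and diagonal covariance below are hypothetical choices for illustration. Both estimates are unbiased, but the UMVU/GLS estimate requires Σ, taken here as known.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Hypothetical design matrix (intercept plus two regressors) and coefficients
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta = np.array([1.0, 2.0, -0.5])

# Non-spherical covariance: Sigma diagonal with unequal variances
v = rng.uniform(0.2, 5.0, size=n)
y = X @ beta + rng.normal(size=n) * np.sqrt(v)

# OLS estimator (X'X)^{-1} X'y: unbiased whatever Sigma is
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# UMVU/GLS estimator (X' Sigma^{-1} X)^{-1} X' Sigma^{-1} y: needs Sigma
Xw = X / v[:, None]  # Sigma^{-1} X, since Sigma is diagonal here
beta_gls = np.linalg.solve(Xw.T @ X, Xw.T @ y)

print(beta_ols)
print(beta_gls)  # typically closer to beta, with smaller sampling variance
```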

  14. Theorem: Equivalence of OLS estimator with UMVU estimator

  15. Proof

  16. Application: Consider the general linear model with an intercept. In this case suppose the error terms are equally correlated: Σ = σ²[(1 − ρ)I + ρJ], where J is the matrix of all ones. Also in this case the OLS estimators are equivalent to the UMVU estimators.
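
A numerical check of this application with invented data: when X contains an intercept column and the errors are equally correlated, the OLS and GLS estimates coincide exactly.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50

# Design with an intercept column; the other regressors are arbitrary
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = rng.normal(size=n)  # the identity below holds for any response vector

# Equally correlated errors: Sigma = sigma^2 * ((1 - rho) I + rho J)
rho = 0.4
Sigma = (1 - rho) * np.eye(n) + rho * np.ones((n, n))

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
Si = np.linalg.inv(Sigma)
beta_gls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)

print(np.allclose(beta_ols, beta_gls))  # True: OLS is equivalent to UMVU here
```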

  17. Proof

  18. Design Matrix, X, not of full rank

  19. The General Linear Model

  20. If the rank of X is equal to p, then the columns of X are linearly independent and there is a unique way of representing μ = Xβ. If the rank of X is strictly less than p, then there is no unique way of representing μ = Xβ.

  21. Comment: Usually the situation where the rank of X, r < p, arises in the following instances.
  • The design of the study (the choice of the values of X1, X2, …, Xp) was not careful enough to ensure that X had full rank.
  • Observations were missing, causing the model to be altered: elements of y are deleted along with the corresponding rows of X, reducing the number of linearly independent rows from p to r.
  • The model was defined in such a way that μi = β1xi1 + β2xi2 + … + βpxip does not determine β1, β2, …, βp uniquely.

  22. Two basic approaches:
  • Impose p − r linear restrictions on the parameters. This allows us to reduce the number of parameters to r: μ = Xβ will have a unique representation once the p − r restrictions are added. This technique is usually used with ANOVA, MANOVA, and ANCOVA models.
  • Live with the singularity. Restrict our attention to linear combinations of the parameters that have unique estimators.
  The two approaches are essentially the same (they lead to the same conclusions).

  23. Recall: linear equations theory. Consider the system of linear equations Ax = b, and let M(A) denote the linear space spanned by the columns of A. The system is consistent (has at least one solution) if and only if b ∈ M(A).

  24. Then the general solution to the system of linear equations is x = A⁻b + (I − A⁻A)z, where A⁻ is any generalized inverse of A (a matrix satisfying AA⁻A = A) and z is an arbitrary vector.

  25. Maximum Likelihood Estimation leads to the system of linear equations X′Xβ̂ = X′y: p equations in p unknowns, called the Normal equations.

  26. Theorem: The Normal equations are consistent. Proof: It can be shown that M(X′X) ⊆ M(X′) and M(X′) ⊆ M(X′X), so M(X′X) = M(X′); since X′y ∈ M(X′) = M(X′X), the Normal equations have a solution. Theorem: The general solution to the Normal equations is β̂ = (X′X)⁻X′y + (I − (X′X)⁻X′X)z, with z arbitrary.

  27. Theorem: Xβ̂ is the same for all solutions β̂ of the Normal equations. Proof: The general solution to the Normal equations is β̂ = (X′X)⁻X′y + (I − (X′X)⁻X′X)z. Since M(X′) ⊆ M(X′X), there exists a p × n matrix L such that X′ = X′XL, or X = L′X′X. Hence Xβ̂ = L′X′Xβ̂ = L′X′y for every solution β̂, which does not depend on z.
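
A small NumPy illustration of the two theorems above, with a contrived rank-deficient design: two different solutions of the Normal equations (one from the Moore-Penrose pseudoinverse, one shifted along the null space of X) give identical fitted values Xβ̂.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10

# Rank-deficient design: the third column equals the sum of the first two
a, b = rng.normal(size=n), rng.normal(size=n)
X = np.column_stack([a, b, a + b])  # rank 2 < p = 3
y = rng.normal(size=n)

XtX, Xty = X.T @ X, X.T @ y

# One solution of the Normal equations (minimum norm, via pseudoinverse) ...
b1 = np.linalg.pinv(XtX) @ Xty
# ... and another, shifted along the null space of X (X @ null = 0)
null = np.array([1.0, 1.0, -1.0])
b2 = b1 + 2.5 * null

print(np.allclose(XtX @ b2, Xty))   # True: b2 also solves the Normal equations
print(np.allclose(X @ b1, X @ b2))  # True: X @ beta-hat is solution-invariant
```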

  28. Definition (Estimability): The linear function of the parameter vector, ℓ′β, is called estimable if there exists a vector a such that E(a′y) = ℓ′β for all β. Example: the Simple Linear Model.

  29. Thus it is the only estimable function of β0, β1.

  30. Theorem: The following conditions are equivalent.
  1. ℓ′β is estimable.
  2. For some solution, β̂, of the Normal equations, ℓ′β̂ is a linear (in y) unbiased estimate of ℓ′β.
  3. ℓ′β̂ is the same for all solutions, β̂, of the Normal equations.
  4. ℓ ∈ M(X′X).
  5. ℓ ∈ M(X′).

  31. Proof: Assume 1. Then there exists a vector a such that ℓ′β = E(a′y) = a′Xβ for all β, so ℓ = X′a ∈ M(X′). Since M(X′) = M(X′X), 1. implies 5. (as well as 4.). Now assume 4.

  32. Then ℓ = X′Xc for some vector c, so for any solution β̂ of the Normal equations, ℓ′β̂ = c′X′Xβ̂ = c′X′y, which is the same for every solution. Thus 4. implies 3. Moreover c′X′y is linear in y and unbiased for ℓ′β. Thus 4. implies 2. and 1.

  33. Example: One-way ANOVA (Analysis of Variance). Suppose we have k normal populations. Let yi1, yi2, …, yin denote a sample of n from the N(μ + αi, σ²) distribution. Let εij = yij − (μ + αi); then εi1, εi2, …, εin denotes a sample of n from the N(0, σ²) distribution, where ε11, ε12, …, εkn are kn independent observations from the N(0, σ²) distribution.

  34. Matrix notation: let y = (y11, …, y1n, y21, …, y2n, …, yk1, …, ykn)′, β = (μ, α1, …, αk)′, and let ε be the corresponding vector of the errors εij.

  35. Then the model is y = Xβ + ε, where X is the kn × (k + 1) design matrix whose first column is all ones and whose remaining k columns indicate membership in populations 1, …, k.

  36. M(X) = then linear space spanned by the vectors

  37. Thus the estimable parameters are of the form ℓ′β = c1(μ + α1) + c2(μ + α2) + … + ck(μ + αk). The common approach is to add the restriction α1 + α2 + … + αk = 0. This reduces the number of parameters to k, and converts the model to full rank.
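
A NumPy sketch of this one-way ANOVA setup (group means and sample sizes invented for the example): the design matrix has rank k < k + 1, yet μ + αi is estimable and is recovered identically from any solution of the Normal equations.

```python
import numpy as np

k, n = 3, 4  # three groups, four observations each (hypothetical)
rng = np.random.default_rng(4)
y = np.concatenate([5 + rng.normal(size=n),
                    7 + rng.normal(size=n),
                    6 + rng.normal(size=n)])

# Overparameterized design: columns are [1, group 1, group 2, group 3]
groups = np.repeat(np.arange(k), n)
X = np.column_stack([np.ones(k * n),
                     (groups[:, None] == np.arange(k)).astype(float)])
print(np.linalg.matrix_rank(X))  # 3 = k, less than p = k + 1 = 4

# Minimum-norm solution of the Normal equations
b = np.linalg.pinv(X.T @ X) @ (X.T @ y)

# mu + alpha_i is estimable: it equals the i-th group mean
mu_plus_alpha = b[0] + b[1:]
print(np.allclose(mu_plus_alpha, y.reshape(k, n).mean(axis=1)))  # True
```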

  38. Properties of estimable functions:
  • If rank(X) = p, then all linear functions ℓ′β are estimable. Proof: if rank(X) = p then M(X′) = Ep, the p-dimensional Euclidean space (which contains all p-dimensional vectors).
  • ℓ′β is estimable if ℓ′β̂ is unique for all solutions of the Normal equations. Proof: by the preceding theorem, invariance over all solutions (condition 3.) is equivalent to ℓ ∈ M(X′); hence ℓ′β is estimable.
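
The membership condition ℓ ∈ M(X′) can also be checked numerically. A sketch with a hypothetical two-group ANOVA design; the helper is_estimable is our own illustration, not from the slides.

```python
import numpy as np

def is_estimable(X, ell):
    """ell' beta is estimable iff ell lies in M(X') = the row space of X,
    i.e., appending ell' as an extra row does not raise the rank."""
    return np.linalg.matrix_rank(np.vstack([X, ell])) == np.linalg.matrix_rank(X)

# Two-group one-way ANOVA design: columns [1, group 1, group 2]
X = np.array([[1., 1., 0.],
              [1., 1., 0.],
              [1., 1., 0.],
              [1., 0., 1.],
              [1., 0., 1.],
              [1., 0., 1.]])

print(is_estimable(X, np.array([1., 1., 0.])))   # mu + alpha_1: True
print(is_estimable(X, np.array([0., 1., -1.])))  # alpha_1 - alpha_2: True
print(is_estimable(X, np.array([0., 1., 0.])))   # alpha_1 alone: False
```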

  39. If ℓ1′β and ℓ2′β are estimable, then so is a1ℓ1′β + a2ℓ2′β for any constants a1, a2. Proof: since ℓ1′β and ℓ2′β are estimable, ℓ1 ∈ M(X′) and ℓ2 ∈ M(X′); hence a1ℓ1 + a2ℓ2 ∈ M(X′), so a1ℓ1′β + a2ℓ2′β is estimable.
