
  1. Advanced Research Skills Lecture 6 Generalized Linear Models Olivier MISSA, om502@york.ac.uk

  2. Outline • Continue exploring options available when the assumptions of classical linear models are untenable. • In this lecture: what can we do when observations are not continuous and the residuals are neither normally distributed nor identically distributed?

  3. Classical Linear Models • Defined by three assumptions: • (1) the response variable is continuous, • (2) the residuals (ε) are normally distributed and ... • (3) ... independently (3a) and identically distributed (3b). • Today, we will consider a range of options available when assumptions (1), (2) and/or (3b) do not hold.

  4. Non-continuous response variable • Many situations exist: • The response variable could be • (1) a count (number of individuals in a population) (number of species in a community) • (2) a proportion (proportion "cured" after treatment) (proportion of threatened species) • (3) a categorical variable (breeding/non-breeding) • (different phenotypes) • (4) a strictly positive value (esp. time to success) (or time to failure) • ( ... ) and so forth

  5. Added difficulties • These types of non-continuous variables also tend to deviate from the assumptions of Normality (assumption #2) and Homoscedasticity (assumption #3b). • (1) A count variable often follows a Poisson distribution (where the variance increases linearly with the mean). • (2) A proportion often follows a Binomial distribution (where the variance reaches a maximum for intermediate values and a minimum at either end: 0% or 100%).

  6. Added difficulties • These types of non-continuous variables also tend to deviate from the assumptions of Normality (assumption #2) and Homoscedasticity (assumption #3b). • (3) A categorical variable tends to follow a Binomial distribution (when the variable has only two levels) or a Multinomial distribution (when the variable has more than two levels). • (4) Time to success/failure can follow an exponential distribution or an inverse Gaussian distribution (the latter having a variance increasing more quickly than the mean).
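  A quick simulation (not from the slides, with arbitrary parameter values) illustrates the first two of these mean-variance patterns, for counts and proportions:

  > ## Poisson counts: the variance tracks the mean
  > y <- rpois(10000, lambda=5)
  > c(mean(y), var(y))    ## both close to 5
  > ## Binomial proportions: the variance peaks at intermediate probabilities
  > pr <- c(0.05, 0.5, 0.95)
  > sapply(pr, function(p) var(rbinom(10000, size=20, prob=p)/20))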

  7. Fortunately • Many of these situations can be unified under a central framework, since all these distributions (and a few more) belong to the exponential family of distributions. • Canonical form (a probability density function if y is continuous, a probability mass function if y is discrete):
  f(y; \theta, \phi) = \exp\{ [y\theta - b(\theta)] / a(\phi) + c(y, \phi) \}
  where \theta is the canonical (location) parameter and \phi is the dispersion parameter.
  • Mean: E(y) = b'(\theta).  Variance: Var(y) = b''(\theta)\,a(\phi).

  8. The Normal distribution • Probability density function:
  f(y; \mu, \sigma^2) = (2\pi\sigma^2)^{-1/2} \exp\{ -(y-\mu)^2 / (2\sigma^2) \}
  • Canonical form:
  f(y; \mu, \sigma^2) = \exp\{ [y\mu - \mu^2/2] / \sigma^2 - y^2/(2\sigma^2) - (1/2)\log(2\pi\sigma^2) \}
  • Canonical (location) parameter: \theta = \mu.  Dispersion parameter: \phi = \sigma^2.

  9. The Poisson distribution • Probability mass function:
  f(y; \lambda) = e^{-\lambda} \lambda^y / y!
  • Canonical form:
  f(y; \lambda) = \exp\{ y\log\lambda - \lambda - \log(y!) \}
  • Canonical (location) parameter: \theta = \log\lambda.  Dispersion parameter: \phi = 1.

  10. The Binomial distribution • Probability mass function:
  f(y; n, \pi) = \binom{n}{y} \pi^y (1-\pi)^{n-y}
  • Canonical form:
  f(y; n, \pi) = \exp\{ y\log[\pi/(1-\pi)] + n\log(1-\pi) + \log\binom{n}{y} \}
  • Canonical (location) parameter: \theta = \log[\pi/(1-\pi)] (the logit).  Dispersion parameter: \phi = 1.

  11. Why is that remotely useful? • 1) A single algorithm (maximum likelihood) will cope with all these situations. • 2) Different types of variance can be accommodated: • When Var is constant -> Normal (Gaussian) • When Var increases linearly with the mean -> Poisson • When Var has a humped-back shape -> Binomial • When Var increases as the square of the mean -> Gamma (means the coefficient of variation remains constant) • When Var increases as the cube of the mean -> inverse Gaussian • 3) Most types of data are thus effectively covered. (These variance functions can be checked in R; see the sketch below.)
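  These variance-mean relationships are built into R's family objects, so they can be inspected directly. A minimal sketch (not part of the slides), evaluating each family's variance function V(mu) at an arbitrary mean:

  > gaussian()$variance(2)          ## constant
  [1] 1
  > poisson()$variance(2)           ## = mu
  [1] 2
  > binomial()$variance(0.5)        ## = mu*(1-mu)
  [1] 0.25
  > Gamma()$variance(2)             ## = mu^2
  [1] 4
  > inverse.gaussian()$variance(2)  ## = mu^3
  [1] 8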

  12. Non-independent Observations • Two ways to cope with non-independent observations: • When the design is balanced ("equal sample sizes"): we can use factors to partition our observations into different "groups" and analyse them as an ANOVA or ANCOVA. We already know how to do that (when factors are "crossed"); we just need to figure out how to cope with nested factors. • When the design is unbalanced ("uneven sample sizes"): mixed effect models are then called for.

  13. How does it work? • 1) You need to specify the family of distribution to use. • 2) You need to specify the link function. The linear predictor is \eta = X\beta, and the link function g relates the mean of the response to it: g(\mu) = \eta. For each type of variable the "natural" link function to use is indicated by the canonical parameter:
  Family           Canonical link
  Normal           Identity
  Poisson          Log
  Binomial         Logit
  Gamma            Inverse
  Inv. Gaussian    Inverse square
  (An example call is sketched below.)
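  In R, both choices are passed through the family argument of glm(). A minimal sketch with simulated (hypothetical) data, for illustration only:

  > ## Simulated Poisson data: hypothetical response y and predictor x
  > x <- runif(50)
  > y <- rpois(50, lambda=exp(1 + 2*x))
  > ## The canonical link is the default, so these two calls are equivalent
  > mod1 <- glm(y ~ x, family=poisson)
  > mod2 <- glm(y ~ x, family=poisson(link="log"))
  > ## A non-canonical link can also be requested explicitly
  > mod3 <- glm(y ~ x, family=poisson(link="sqrt"))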

  14. Count variable • This type of response variable often follows a Poisson distribution, with variance increasing in direct relation with the mean. • The family to use is poisson and the canonical link is log. • Example: What are the environmental variables associated with plant diversity on the Galapagos?
  > library(faraway)
  > data(gala)
  > names(gala)
  [1] "Species"   "Endemics"  "Area"      "Elevation" "Nearest"
  [6] "Scruz"     "Adjacent"
  > attach(gala)
  Beware: some missing data in the original dataset have been filled in for convenience. Johnson, M.P. & Raven, P.H. (1973) Science 179(4076): 893-895.

  15. Count variable
  > summary(gala)
     Species          Endemics          Area
   Min.   :  2.00   Min.   : 0.00   Min.   :   0.0100
   1st Qu.: 13.00   1st Qu.: 7.25   1st Qu.:   0.2575
   Median : 42.00   Median :18.00   Median :   2.5900
   Mean   : 85.23   Mean   :26.10   Mean   : 261.7087
   3rd Qu.: 96.00   3rd Qu.:32.25   3rd Qu.:  59.2375
   Max.   :444.00   Max.   :95.00   Max.   :4669.3200
     Elevation         Nearest          Scruz           Adjacent
   Min.   :  25.00   Min.   : 0.20   Min.   :  0.00   Min.   :   0.03
   1st Qu.:  97.75   1st Qu.: 0.80   1st Qu.: 11.03   1st Qu.:   0.52
   Median : 192.00   Median : 3.05   Median : 46.65   Median :   2.59
   Mean   : 368.03   Mean   :10.06   Mean   : 56.98   Mean   : 261.10
   3rd Qu.: 435.25   3rd Qu.:10.03   3rd Qu.: 81.08   3rd Qu.:  59.24
   Max.   :1707.00   Max.   :47.40   Max.   :290.20   Max.   :4669.32
  > gala <- gala[,-2]   ## removing variable "Endemics"
  > modp <- glm(Species ~ ., family=poisson, data=gala)
  By default the link for a Poisson is log.

  16. Count variable
  > summary(modp)
  Coefficients:
                Estimate Std. Error z value Pr(>|z|)
  (Intercept)  3.155e+00  5.175e-02  60.963  < 2e-16 ***
  Area        -5.799e-04  2.627e-05 -22.074  < 2e-16 ***
  Elevation    3.541e-03  8.741e-05  40.507  < 2e-16 ***
  Nearest      8.826e-03  1.821e-03   4.846 1.26e-06 ***
  Scruz       -5.709e-03  6.256e-04  -9.126  < 2e-16 ***
  Adjacent    -6.630e-04  2.933e-05 -22.608  < 2e-16 ***
  ---
  (Dispersion parameter for poisson family taken to be 1)
      Null deviance: 3510.73  on 29  degrees of freedom
  Residual deviance:  716.85  on 24  degrees of freedom
  AIC: 889.68
  Number of Fisher Scoring iterations: 5
  These tests are only valid if the response variable does indeed follow a Poisson: the residual deviance (also called the G-statistic) and its degrees of freedom need to be broadly similar.

  17. Count variable • This dispersion parameter (\phi) must be calculated from the Pearson residuals and the residual degrees of freedom:
  > (dp <- sum(residuals(modp, type="pearson")^2)/modp$df.res)
  [1] 31.74914
  This suggests that the variance is 31.8 times the mean. In statistical terms this is called overdispersion. In biological terms, it suggests that the counts are not independent from each other but instead are aggregated (i.e. clumped). Typically, overdispersed count data follow a negative binomial distribution, which is not part of the exponential family of distributions. It won't be covered here, but it can be approximated as a quasi-Poisson (family="quasipoisson"). If you need it in your future work, you can also try glm.nb (in the MASS package); both options are sketched below.
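  A minimal sketch (not from the slides) of those two alternatives, assuming gala has been prepared as above (with "Endemics" removed):

  > ## Quasi-Poisson: same coefficients as the Poisson fit, but the
  > ## standard errors are scaled by the estimated dispersion parameter
  > modq <- glm(Species ~ ., family=quasipoisson, data=gala)
  > summary(modq)
  > ## Negative binomial: models the aggregation explicitly (requires MASS)
  > library(MASS)
  > modnb <- glm.nb(Species ~ ., data=gala)
  > summary(modnb)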

  18. Count variable • The summary table can be adjusted with the dispersion parameter; these values can now be taken at face value:
  > summary(modp, dispersion=dp)
  Coefficients:
                Estimate Std. Error z value Pr(>|z|)
  (Intercept)  3.1548079  0.2915897  10.819  < 2e-16 ***
  Area        -0.0005799  0.0001480  -3.918 8.95e-05 ***
  Elevation    0.0035406  0.0004925   7.189 6.53e-13 ***
  Nearest      0.0088256  0.0102621   0.860    0.390
  Scruz       -0.0057094  0.0035251  -1.620    0.105
  Adjacent    -0.0006630  0.0001653  -4.012 6.01e-05 ***
  ---
  (Dispersion parameter for poisson family taken to be 31.74914)
      Null deviance: 3510.73  on 29  degrees of freedom
  Residual deviance:  716.85  on 24  degrees of freedom
  AIC: 889.68

  19. Count variable • The drop1 function can be used to simplify the model; use test="F" when you have overdispersed data:
  > drop1(modp, test="F")
  Model: Species ~ Area + Elevation + Nearest + Scruz + Adjacent
            Df Deviance     AIC F value     Pr(F)
  <none>         716.85  889.68
  Area       1  1204.35 1375.18 16.3217 0.0004762 ***
  Elevation  1  2389.57 2560.40 56.0028 1.007e-07 ***
  Nearest    1   739.41  910.24  0.7555 0.3933572
  Scruz      1   813.62  984.45  3.2400 0.0844448 .
  Adjacent   1  1341.45 1512.29 20.9119 0.0001230 ***
  ---
  Warning message:
  In drop1.glm(modp, test = "F") : F test assumes 'quasipoisson' family
  It is safer to use the F values and their p-values: the AIC values are dodgy when quasipoisson is used. "Nearest" should probably be removed from the model.

  20. Count variable
  > modp2 <- update(modp, ~. - Nearest)
  > (dp2 <- sum(residuals(modp2, type="pearson")^2)/modp2$df.res)
  [1] 29.53501
  > summary(modp2, dispersion=dp2)
  Coefficients:
                Estimate Std. Error z value Pr(>|z|)
  (Intercept)  3.1599640  0.2805140  11.265  < 2e-16 ***
  Area        -0.0005978  0.0001396  -4.283 1.85e-05 ***
  Elevation    0.0035769  0.0004675   7.651 1.99e-14 ***
  Scruz       -0.0038565  0.0025216  -1.529    0.126
  Adjacent    -0.0007030  0.0001521  -4.621 3.82e-06 ***
  ---
  (Dispersion parameter for poisson family taken to be 29.53501)
      Null deviance: 3510.73  on 29  degrees of freedom
  Residual deviance:  739.41  on 25  degrees of freedom
  AIC: 910.24

  21. Count variable
  > drop1(modp2, test="F")
  Model: Species ~ Area + Elevation + Scruz + Adjacent
            Df Deviance     AIC F value     Pr(F)
  <none>         739.41  910.24
  Area       1  1290.08 1458.91 18.6184 0.0002200 ***
  Elevation  1  2525.09 2693.92 60.3749 3.981e-08 ***
  Scruz      1   818.74  987.57  2.6822 0.1140018
  Adjacent   1  1570.87 1739.70 28.1123 1.709e-05 ***
  ---
  Warning message:
  In drop1.glm(modp2, test = "F") : F test assumes 'quasipoisson' family
  "Scruz" can now be removed in the same way:
  > modp3 <- update(modp2, ~. - Scruz)
  > (dp3 <- sum(residuals(modp3, type="pearson")^2)/modp3$df.res)
  [1] 30.08155

  22. Count variable
  > summary(modp3, dispersion=dp3)
  Coefficients:
                Estimate Std. Error z value Pr(>|z|)
  (Intercept)  2.9613109  0.2617588  11.313  < 2e-16 ***
  Area        -0.0005704  0.0001381  -4.129 3.64e-05 ***
  Elevation    0.0035891  0.0004721   7.602 2.91e-14 ***
  Adjacent    -0.0007508  0.0001524  -4.928 8.32e-07 ***
  ---
  (Dispersion parameter for poisson family taken to be 30.08155)
      Null deviance: 3510.73  on 29  degrees of freedom
  Residual deviance:  818.74  on 26  degrees of freedom
  AIC: 987.57
  How good is the model? 1 - (Res. Dev. / Null Dev.) = 1 - (818.74 / 3510.73) = 76.68 %
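  This deviance-based measure of fit can be computed directly from the fitted object; a quick sketch (not from the slides):

  > ## Proportion of the null deviance explained by the model
  > ## (a deviance-based pseudo R-squared)
  > 1 - modp3$deviance/modp3$null.deviance   ## approximately 0.7668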

  23. Count variable • Checking the model: plotting residuals vs fitted values (several options). Here the residuals are the deviance version (the default) and the fitted values are in the original response scale:
  > plot(residuals(modp3) ~ predict(modp3, type="response"),
         xlab=expression(hat(mu)), ylab="Deviance residuals")

  24. Count variable • Checking the model: plotting residuals vs fitted values, both in the linked scale (log for a Poisson). This version is the clearest to inspect; look for a "good spread" of residuals:
  > plot(residuals(modp3) ~ predict(modp3, type="link"),
         xlab=expression(hat(eta)), ylab="Deviance residuals")

  25. Count variable • Checking the model: plotting residuals vs fitted values, both in the original response scale. This version is harder to read:
  > plot(residuals(modp3, type="response") ~ predict(modp3, type="response"),
         xlab=expression(hat(mu)), ylab="Response residuals")

  26. Count variable • Checking the model: do the residuals have the right distribution?
  > shapiro.test(residuals(modp3, type="deviance"))
          Shapiro-Wilk normality test
  data:  residuals(modp3, type = "deviance")
  W = 0.9811, p-value = 0.854
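  A complementary visual check (not on the slides) is a normal quantile-quantile plot of the same deviance residuals, assuming modp3 is the fitted model from above:

  > ## Points lying close to the reference line support the
  > ## Shapiro-Wilk result above
  > qqnorm(residuals(modp3, type="deviance"))
  > qqline(residuals(modp3, type="deviance"))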
