Econometric Analysis of Panel Data

  1. Econometric Analysis of Panel Data
     William Greene, Department of Economics, Stern School of Business

  2. Econometrics
     • Theoretical foundations
     • Microeconometrics and macroeconometrics
     • Behavioral modeling
     • Statistical foundations: econometric methods
     • Mathematical elements: the usual
     • ‘Model’ building – the econometric model

  3. Estimation Platforms
     • Model based
       • Kernels and smoothing methods (nonparametric)
       • Semiparametric analysis
       • Parametric analysis
       • Moments and quantiles (semiparametric)
       • Likelihood and M-estimators (parametric)
     • Methodology based (?)
       • Classical – parametric and semiparametric
       • Bayesian – strongly parametric

  4. Trends in Econometrics
     • Small structural models vs. large-scale multiple-equation models
     • Non- and semiparametric methods vs. parametric
     • Robust methods – GMM (paradigm shift? Nobel prize)
     • Unit roots, cointegration and macroeconometrics
     • Nonlinear modeling and the role of software
     • Behavioral and structural modeling vs. “reduced form,” “covariance analysis”
     • Pervasiveness of an econometrics paradigm
     • Identification and “causal” effects

  5. Objectives in Model Building
     • Specification: guided by underlying theory
       • Modeling framework
       • Functional forms
     • Estimation: coefficients, partial effects, model implications
     • Statistical inference: hypothesis testing
     • Prediction: individual and aggregate
     • Model assessment (fit, adequacy) and evaluation
     • Model extensions
       • Interdependencies, multiple part models
       • Heterogeneity
       • Endogeneity
     • Exploration: estimation and inference methods

  6. Regression Basics
     The “MODEL”
     • Modeling the conditional mean – regression
     Other features of interest
     • Modeling quantiles
     • Conditional variances or covariances
     • Modeling probabilities for discrete choice
     • Modeling other features of the population

  7. Application: Health Care Usage
     German Health Care Usage Data, 7,293 individuals, varying numbers of periods. Data downloaded from the Journal of Applied Econometrics Archive. They can be used for regression, count models, binary choice, ordered choice, and bivariate binary choice. There are altogether 27,326 observations; the number of observations per individual ranges from 1 to 7 (frequencies: 1=1525, 2=2158, 3=825, 4=926, 5=1051, 6=1000, 7=987).
     Variables in the file are:
     DOCTOR   = 1(number of doctor visits > 0)
     HOSPITAL = 1(number of hospital visits > 0)
     HSAT     = health satisfaction, coded 0 (low) to 10 (high)
     DOCVIS   = number of doctor visits in last three months
     HOSPVIS  = number of hospital visits in last calendar year
     PUBLIC   = insured in public health insurance = 1; otherwise = 0
     ADDON    = insured by add-on insurance = 1; otherwise = 0
     HHNINC   = household nominal monthly net income in German marks / 10000 (4 observations with income = 0 were dropped)
     HHKIDS   = children under age 16 in the household = 1; otherwise = 0
     EDUC     = years of schooling
     AGE      = age in years
     MARRIED  = marital status

  8. Household Income
     [Figure: kernel density estimator and histogram of household income]

  9. Regression – Income on Education
     ----------------------------------------------------------------------
     Ordinary least squares regression
     LHS=LOGINC      Mean                 =    -.92882
                     Standard deviation   =     .47948
                     Number of observs.   =        887
     Model size      Parameters           =          2
                     Degrees of freedom   =        885
     Residuals       Sum of squares       =  183.19359
                     Standard error of e  =     .45497
     Fit             R-squared            =     .10064
                     Adjusted R-squared   =     .09962
     Model test      F[  1,   885] (prob) = 99.0(.0000)
     Diagnostic      Log likelihood       = -559.06527
                     Restricted(b=0)      = -606.10609
                     Chi-sq [  1]  (prob) = 94.1(.0000)
     Info criter.    LogAmemiya Prd. Crt. =   -1.57279
     --------+-------------------------------------------------------------
     Variable| Coefficient   Standard Error  b/St.Er.  P[|Z|>z]  Mean of X
     --------+-------------------------------------------------------------
     Constant|  -1.71604***      .08057      -21.299    .0000
         EDUC|    .07176***      .00721        9.951    .0000    10.9707
     --------+-------------------------------------------------------------
     Note: ***, **, * = Significance at 1%, 5%, 10% level.
     ----------------------------------------------------------------------

  10. Specification and Functional Form
     ----------------------------------------------------------------------
     Ordinary least squares regression
     LHS=LOGINC      Mean                 =    -.92882
                     Standard deviation   =     .47948
                     Number of observs.   =        887
     Model size      Parameters           =          3
                     Degrees of freedom   =        884
     Residuals       Sum of squares       =  183.00347
                     Standard error of e  =     .45499
     Fit             R-squared            =     .10157
                     Adjusted R-squared   =     .09954
     Model test      F[  2,   884] (prob) = 50.0(.0000)
     Diagnostic      Log likelihood       = -558.60477
                     Restricted(b=0)      = -606.10609
                     Chi-sq [  2]  (prob) = 95.0(.0000)
     Info criter.    LogAmemiya Prd. Crt. =   -1.57158
     --------+-------------------------------------------------------------
     Variable| Coefficient   Standard Error  b/St.Er.  P[|Z|>z]  Mean of X
     --------+-------------------------------------------------------------
     Constant|  -1.68303***      .08763      -19.207    .0000
         EDUC|    .06993***      .00746        9.375    .0000    10.9707
       FEMALE|   -.03065        .03199        -.958     .3379     .42277
     --------+-------------------------------------------------------------

  11. Interesting Partial Effects
     ----------------------------------------------------------------------
     Ordinary least squares regression
     LHS=LOGINC      Mean                 =    -.92882
                     Standard deviation   =     .47948
                     Number of observs.   =        887
     Model size      Parameters           =          5
                     Degrees of freedom   =        882
     Residuals       Sum of squares       =  171.87964
                     Standard error of e  =     .44145
     Fit             R-squared            =     .15618
                     Adjusted R-squared   =     .15235
     Model test      F[  4,   882] (prob) = 40.8(.0000)
     Diagnostic      Log likelihood       = -530.79258
                     Restricted(b=0)      = -606.10609
                     Chi-sq [  4]  (prob) = 150.6(.0000)
     Info criter.    LogAmemiya Prd. Crt. =   -1.62978
     --------+-------------------------------------------------------------
     Variable| Coefficient   Standard Error  b/St.Er.  P[|Z|>z]  Mean of X
     --------+-------------------------------------------------------------
     Constant|  -5.26676***      .56499       -9.322    .0000
         EDUC|    .06469***      .00730        8.860    .0000    10.9707
       FEMALE|   -.03683        .03134       -1.175     .2399     .42277
          AGE|    .15567***      .02297        6.777    .0000    50.4780
         AGE2|   -.00161***      .00023       -7.014    .0000    2620.79
     --------+-------------------------------------------------------------

  12. Function: Log Income | Age
      [Figure: fitted log income as a function of age, with the partial effect with respect to age]

  13. A Statistical Relationship
     • A relationship of interest:
       • Number of hospital visits: H = 0, 1, 2, …
       • Covariates: x1 = Age, x2 = Sex, x3 = Income, x4 = Health
     • Causality and covariation
       • Theoretical implications of ‘causation’
       • Comovement and association
       • Intervention of omitted or ‘latent’ variables
       • Temporal relationship – movement of the “causal variable” precedes the effect

  14. (Endogeneity)
     • A relationship of interest:
       • Number of hospital visits: H = 0, 1, 2, …
       • Covariates: x1 = Age, x2 = Sex, x3 = Income, x4 = Health
     • Should Health be ‘endogenous’ in this model?
     • What do we mean by ‘endogenous’?
     • What is an appropriate econometric method of accommodating endogeneity?

  15. Models
     • Conditional mean function: E[y | x]
     • Other conditional characteristics – what is ‘the model?’
       • Conditional variance function: Var[y | x]
       • Conditional quantiles, e.g., Median[y | x]
       • Other conditional moments
       • Conditional probabilities: P(y | x)
     • What is the sense in which “y varies with x?”

  16. Using the Model
     • Understanding the relationship
     • Estimation of quantities of interest such as elasticities
     • Prediction of the outcome of interest
     • Control of the path of the outcome of interest

  17. Application: Doctor Visits
     • German individual health care data: N = 27,326
     • Model for number of visits to the doctor:
       • Poisson regression (fit by maximum likelihood): E[V | Income] = exp(1.412 − .0745 × Income)
       • OLS linear regression: g*(Income) = 3.917 − .208 × Income
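The contrast between the two fitted mean functions can be checked directly. A minimal sketch, using only the coefficients reported on the slide (the data themselves are not reproduced here):

```python
import math

# Fitted models from the slide:
#   Poisson regression: E[V | Income] = exp(1.412 - 0.0745 * Income)
#   Linear projection:  g*(Income)    = 3.917 - 0.208 * Income
def poisson_mean(income):
    return math.exp(1.412 - 0.0745 * income)

def linear_projection(income):
    return 3.917 - 0.208 * income

# Inside the range of the data the two functions are close; far outside
# it, the linear projection turns negative while the exponential mean
# stays positive (the point made on the next slide).
for inc in (0.0, 0.35, 1.0, 20.0):
    print(inc, round(poisson_mean(inc), 3), round(linear_projection(inc), 3))
```

Evaluating both functions at a few income values shows why the next slide flags the linear projection's negative predictions outside the data range.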

  18. Conditional Mean and Linear Projection
      [Figure: conditional mean function and linear projection of doctor visits on income. Most of the data lie in a narrow range of income; outside that range the linear projection gives negative predictions.]

  19. What About the Linear Projection?
     • What we do when we linearly regress a variable on a set of variables
     • Assuming there exists a conditional mean
       • There usually exists a linear projection; it requires finite variance of y
     • Approximation to the conditional mean
     • If the conditional mean is linear, the linear projection equals the conditional mean

  20. Partial Effects
     • What did the model tell us?
     • Covariation and partial effects: how does y “vary” with x?
       • Marginal effects: effect on what?
       • For continuous variables
       • For dummy variables
     • Elasticities: ε(x) = δ(x) × x / E[y|x]

  21. Average Partial Effects
     • When δ(x) ≠ β, APE = Ex[δ(x)]
     • Approximation: is δ(E[x]) = Ex[δ(x)]? (No.)
     • Empirically: estimated APE = (1/n) Σi δ(xi)
     • Empirical approximation: Est.APE = δ(x̄)
     • For the doctor visits model:
       • δ(x) = β exp(α + βx) = −.0745 exp(1.412 − .0745 × Income)
       • Sample APE = −.2373
       • Approximation = −.2354
       • Slope of the linear projection = −.2083 (!)
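The gap between averaging δ(x) over the sample and evaluating δ at the sample mean can be illustrated numerically. A sketch using the slide's δ(x), but with a simulated (hypothetical) income sample, since the real data are not reproduced here:

```python
import math
import random

random.seed(1)

# Partial effect from the slide: delta(x) = beta * exp(alpha + beta * x)
# with alpha = 1.412, beta = -0.0745.
ALPHA, BETA = 1.412, -0.0745

def delta(x):
    return BETA * math.exp(ALPHA + BETA * x)

# Hypothetical income sample (an assumption for illustration only).
incomes = [random.lognormvariate(0.0, 0.5) for _ in range(10_000)]

ape = sum(delta(x) for x in incomes) / len(incomes)   # average partial effect
xbar = sum(incomes) / len(incomes)
pea = delta(xbar)                                     # partial effect at the mean

# Close, but not equal: E[delta(x)] != delta(E[x]) when delta is nonlinear.
print(round(ape, 4), round(pea, 4))
```

Because δ(x) here is strictly concave, Jensen's inequality guarantees that the APE lies below the partial effect at the mean, mirroring the −.2373 vs. −.2354 comparison on the slide.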

  22. APE and PE at the Mean
     • Implication: computing the APE by averaging over observations (and counting on the LLN and the Slutsky theorem) vs. computing partial effects at the means of the data.
     • In the earlier example: sample APE = −.2373, approximation = −.2354

  23. The Linear Regression Model
     • y = Xβ + ε, N observations, K columns in X, including a column of ones
     • Standard assumptions about X
     • Standard assumptions about ε | X
       • E[ε|X] = 0, E[ε] = 0 and Cov[ε, x] = 0
     • Regression? If E[y|X] = Xβ, then Xβ is the projection of y on X

  24. Estimation of the Parameters
     • Least squares, LAD, other estimators – we will focus on least squares
     • Classical vs. Bayesian estimation of β
     • Properties
     • Statistical inference: hypothesis tests
     • Prediction (not this course)
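For the one-regressor case, the least squares estimator has a closed form from the normal equations. A minimal sketch on simulated data (the coefficients 0.5 and 0.07 are assumptions for the simulation, loosely echoing the education regression earlier):

```python
import random

random.seed(0)

# Simulated data: y = a + b*x + e with a = 0.5, b = 0.07.
n = 1_000
x = [random.gauss(10.0, 2.0) for _ in range(n)]
y = [0.5 + 0.07 * xi + random.gauss(0.0, 0.4) for xi in x]

# Closed-form least squares: b = Sxy / Sxx, a = ybar - b * xbar.
xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))

b = sxy / sxx          # slope estimate
a = ybar - b * xbar    # intercept estimate

print(round(a, 3), round(b, 3))
```

With K regressors the same logic is written in matrix form, b = (X′X)⁻¹X′y, which is what the regression output pages earlier in the deck report.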

  25. Properties of Least Squares
     • Finite sample properties: unbiased, etc. No longer interested in these.
     • Asymptotic properties
       • Consistent? Under what assumptions?
       • Efficient?
         • Contemporary work: often not important
         • Efficiency within a class: GMM
       • Asymptotically normal: how is this established?
     • Robust estimation: to be considered later

  26. Least Squares Summary

  27. Hypothesis Testing
     • Nested vs. nonnested tests
       • y = b1x + e vs. y = b1x + b2z + e: nested
       • y = bx + e vs. y = cz + u: not nested
       • y = bx + e vs. log y = c log x: not nested
       • y = bx + e, e ~ normal vs. e ~ t[.]: not nested
       • Fixed vs. random effects: not nested
       • Logit vs. probit: not nested
       • x is endogenous: maybe nested. We’ll see …
     • Parametric restrictions
       • Linear: Rβ − q = 0, R is J×K, J < K, full row rank
       • General: r(β, q) = 0, r = a vector of J functions, R(β, q) = ∂r(β, q)/∂β′
       • Use r(β, q) = 0 for linear and nonlinear cases

  28. Example: Panel Data on Spanish Dairy Farms N = 247 farms, T = 6 years (1993-1998)

  29. Application
     • y = log output
     • x = Cobb-Douglas production: x = 1, x1, x2, x3, x4 = constant and logs of 4 inputs (5 terms)
     • z = translog terms: the squares x1², x2², etc., and all cross products x1x2, x1x3, x1x4, x2x3, etc. (10 terms)
     • w = (x, z) (all 15 terms)
     • Null hypothesis is Cobb-Douglas; the alternative is translog = Cobb-Douglas plus the second-order terms
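Constructing the 10 second-order translog terms from the 4 log inputs is mechanical. A sketch (the input values are hypothetical placeholders):

```python
from itertools import combinations_with_replacement

# Build the translog second-order terms from the log inputs: the squares
# x1^2, ..., x4^2 and all cross products x1*x2, x1*x3, ... (10 terms for
# 4 inputs, matching the z list on the slide).
def translog_terms(logs):
    return [a * b for a, b in combinations_with_replacement(logs, 2)]

obs = [0.5, 1.2, -0.3, 2.0]   # hypothetical log-input values for one observation
z = translog_terms(obs)
print(len(z))                  # 10 second-order terms
```

In general, k inputs produce k(k+1)/2 second-order terms, which is why the translog alternative adds 10 columns here.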

  30. Translog Regression Model
      H0: the coefficients on the translog terms z equal 0.

  31. Wald Tests
     • Is r(b, q) close to zero?
     • Wald distance function: r(b, q)′{Var[r(b, q)]}⁻¹ r(b, q) → χ²[J]
     • Use the delta method to estimate Var[r(b, q)]:
       • Est.Asy.Var[b] = s²(X′X)⁻¹
       • Est.Asy.Var[r(b, q)] = R(b, q){s²(X′X)⁻¹}R′(b, q)
     • The standard F test is a Wald test: J·F → χ²[J]
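For a single linear restriction on one coefficient, the Wald distance reduces to a scalar, (b − q)²/Est.Var[b]. A sketch on simulated data (the data-generating values are assumptions for illustration):

```python
import random

random.seed(2)

# Simulated regression y = 1 + 0.2*x + e; test H0: slope = 0.
n = 2_000
x = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [1.0 + 0.2 * xi + random.gauss(0.0, 1.0) for xi in x]

# Closed-form least squares for the one-regressor model.
xbar = sum(x) / n
ybar = sum(y) / n
sxx = sum((xi - xbar) ** 2 for xi in x)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
a = ybar - b * xbar

# s^2 and the (scalar) s^2 (X'X)^{-1} variance estimator.
e = [yi - a - b * xi for xi, yi in zip(x, y)]
s2 = sum(ei ** 2 for ei in e) / (n - 2)
var_b = s2 / sxx

# Wald distance for H0: b = 0; compare with chi-squared[1] (3.84 at 5%).
W = (b - 0.0) ** 2 / var_b
print(round(W, 2), W > 3.84)
```

With one restriction, W is just the squared t ratio, which is the scalar case of the J·F = χ²[J] equivalence noted on the slide.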

  32. Close to 0?

  33. Likelihood Ratio Test
     • The normality assumption: does it work ‘approximately?’
     • For any regression model yi = h(xi, β) + εi where εi ~ N[0, σ²] (linear or nonlinear), at the linear (or nonlinear) least squares estimator, however computed, with or without restrictions,
       logL = −(n/2)[1 + ln 2π + ln(e′e/n)].
     • This forms the basis for likelihood ratio tests.

  34. Score or LM Test: General
     • Maximum likelihood (ML) estimation
     • A hypothesis test
       • H0: restrictions on parameters are true
       • H1: restrictions on parameters are not true
     • Basis for the test: b0 = parameter estimate under H0 (i.e., restricted), b1 = unrestricted
     • Derivative results for the likelihood function under H1:
       • ∂logL1/∂β | β=b1 = 0 (exactly, by definition)
       • ∂logL1/∂β | β=b0 ≠ 0. Is it close? If so, the restrictions look reasonable.

  35. Computing the LM Statistic
     The derivation on page 60 of Wooldridge’s text is needlessly complex, and the second form of LM is actually incorrect because the first derivatives are not ‘heteroscedasticity robust.’

  36. Application of the Score Test
     • Linear model: Y = Xβ + Zδ + ε
     • Test H0: δ = 0
     • Restricted estimator is [b′, 0′]′
     Namelist ; X = a list… ; Z = a list … ; W = X,Z $
     Regress  ; Lhs = y ; Rhs = X ; Res = e $
     Matrix   ; List ; LM = e'W * <W'[e^2]W> * W'e $
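The Matrix command on this slide computes LM = e′W {W′[e²]W}⁻¹ W′e from the restricted residuals. A pure-Python sketch of the same statistic in the simplest case, where the restricted model is a constant only and W = [1, z] (so the 2×2 middle matrix can be inverted in closed form); the data are simulated, not the Spanish dairy data:

```python
import random

random.seed(3)

# Simulated data: z genuinely matters, so the LM test should reject H0: delta = 0.
n = 2_000
z = [random.gauss(0.0, 1.0) for _ in range(n)]
y = [2.0 + 0.3 * zi + random.gauss(0.0, 1.0) for zi in z]

# Restricted residuals: regressing y on a constant gives e = y - ybar.
ybar = sum(y) / n
e = [yi - ybar for yi in y]

# g = W'e and A = W'[e^2]W for W = [1, z].
g0 = sum(e)                                            # zero by construction
g1 = sum(zi * ei for zi, ei in zip(z, e))
a00 = sum(ei ** 2 for ei in e)
a01 = sum(zi * ei ** 2 for zi, ei in zip(z, e))
a11 = sum(zi ** 2 * ei ** 2 for zi, ei in zip(z, e))
det = a00 * a11 - a01 * a01

# LM = g' A^{-1} g, using the closed-form 2x2 inverse;
# under H0 it is chi-squared[1] (3.84 at 5%).
LM = (g0 * (a11 * g0 - a01 * g1) + g1 * (a00 * g1 - a01 * g0)) / det
print(round(LM, 2), LM > 3.84)
```

Note that only the restricted regression is run; that is the practical appeal of the score form over the LR and Wald forms.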

  37. Restricted regression and derivatives for the LM Test

  38. Tests for Omitted Variables
     ? Cobb-Douglas model
     Namelist ; X = One,x1,x2,x3,x4 $
     ? Translog second order terms, squares and cross products of logs
     Namelist ; Z = x11,x22,x33,x44,x12,x13,x14,x23,x24,x34 $
     ? Restricted regression. Short. Has only the log terms
     Regress  ; Lhs = yit ; Rhs = X ; Res = e $
     Calc     ; LoglR = LogL ; RsqR = Rsqrd $
     ? LM statistic using basic matrix algebra
     Namelist ; W = X,Z $
     Matrix   ; List ; LM = e'W * <W'[e^2]W> * W'e $
     ? LR statistic uses the full, long regression with all quadratic terms
     Regress  ; Lhs = yit ; Rhs = W $
     Calc     ; LoglU = LogL ; RsqU = Rsqrd ; List ; LR = 2*(LoglU - LoglR) $
     ? Wald statistic is just J*F for the translog terms
     Calc     ; List ; JF = col(Z)*((RsqU - RsqR)/col(Z)/((1 - RsqU)/(n - kreg))) $

  39. Regression Specifications

  40. Model Selection
     • Regression models: fit measure = R²
     • Nested models: log likelihood, GMM criterion function (distance function)
     • Nonnested models, nonlinear models:
       • Classical
         • Akaike information criterion = −(logL − 2K)/N
         • Bayes (Schwarz) information criterion = −(logL − K log N)/N
       • Bayesian: Bayes factor = posterior odds / prior odds (for noninformative priors, BF = ratio of posteriors)
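The two criteria are one-liners with the normalization used on the slide (many texts instead use −2 logL + 2K and the like; only the ranking of models matters). A sketch, applied to the two log likelihoods reported in the income regressions earlier in the deck (EDUC only vs. EDUC + FEMALE, N = 887):

```python
import math

# Information criteria as written on the slide:
#   AIC = -(logL - 2K)/N,  BIC = -(logL - K*log(N))/N.  Smaller is better.
def aic(loglike, k, n):
    return -(loglike - 2 * k) / n

def bic(loglike, k, n):
    return -(loglike - k * math.log(n)) / n

# EDUC only (K = 2) vs. EDUC + FEMALE (K = 3), N = 887.
print(round(aic(-559.06527, 2, 887), 5))
print(round(aic(-558.60477, 3, 887), 5))
```

Here the extra parameter buys too little log likelihood, so the smaller model wins on AIC, consistent with FEMALE's insignificant coefficient in that regression.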

  41. Remaining to Consider for the Linear Regression Model
     • Failures of standard assumptions
       • Heteroscedasticity
       • Autocorrelation and spatial correlation
     • Robust estimation
     • Omitted variables
     • Measurement error

  42. Endogeneity
     • y = Xβ + ε
     • Definition: E[ε|x] ≠ 0
     • Why not?
       • Omitted variables
       • Unobserved heterogeneity (equivalent to omitted variables)
       • Measurement error on the RHS (equivalent to omitted variables)
       • Structural aspects of the model
       • Endogenous sampling and attrition
       • Simultaneity (?)

  43. Instrumental Variable Estimation
     • One “problem” variable – the “last” one
     • yit = β1x1it + β2x2it + … + βKxKit + εit
     • E[εit | xKit] ≠ 0 (0 for all the others)
     • There exists a variable zit such that
       • E[xKit | x1it, x2it, …, xK-1,it, zit] = g(x1it, x2it, …, xK-1,it, zit): in the presence of the other variables, zit “explains” xKit
       • E[εit | x1it, x2it, …, xK-1,it, zit] = 0: in the presence of the other variables, zit and εit are uncorrelated
     • A projection interpretation: in the projection xKit = θ1x1it + θ2x2it + … + θK-1xK-1,it + θKzit, θK ≠ 0.

  44. The First IV Study: Natural Experiment (Snow, J., On the Mode of Communication of Cholera, 1855)
     http://www.ph.ucla.edu/epi/snow/snowbook3.html
     • London cholera epidemic, ca. 1853-4
     • Cholera = f(Water Purity, u) + ε
     • ‘Causal’ effect of water purity on cholera?
     • Purity = f(cholera-prone environment: poor, garbage in streets, rodents, etc.). Regression does not work.
     • Two London water companies, Lambeth and Southwark, with the main sewage discharge into the River Thames
     Paul Grootendorst: A Review of Instrumental Variables Estimation of Treatment Effects…
     http://individual.utoronto.ca/grootendorst/pdf/IV_Paper_Sept6_2007.pdf

  45. IV Estimation
     • Cholera = f(Purity, u) + ε
     • Z = water company
     • Cov(Cholera, Z) = δ Cov(Purity, Z)
     • Z is randomly mixed in the population (two full sets of pipes) and uncorrelated with the behavioral unobservables u
     • Cholera = α + δ Purity + u + ε
     • Purity = mean + random variation + λu
     • Cov(Cholera, Z) = δ Cov(Purity, Z)
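The moment condition Cov(Cholera, Z) = δ Cov(Purity, Z) delivers the simple IV estimator δ̂ = Cov(y, Z)/Cov(x, Z). A sketch on simulated (hypothetical) data that mimics the setup: an unobservable u moves both x and y, so OLS is biased, while the instrument z shifts x but is independent of u:

```python
import random

random.seed(4)

# Simulated design: true delta = 2.0; u is the confounding unobservable.
n = 20_000
z = [random.gauss(0.0, 1.0) for _ in range(n)]
u = [random.gauss(0.0, 1.0) for _ in range(n)]
x = [1.0 * zi + 0.8 * ui + random.gauss(0.0, 0.5) for zi, ui in zip(z, u)]
y = [2.0 * xi + 1.0 * ui + random.gauss(0.0, 0.5) for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / len(a)

delta_iv = cov(y, z) / cov(x, z)     # consistent for the true 2.0
delta_ols = cov(y, x) / cov(x, x)    # biased upward through the u channel
print(round(delta_iv, 3), round(delta_ols, 3))
```

The IV ratio recovers δ because z enters y only through x, exactly the role the water company plays in Snow's natural experiment.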

  46. Cornwell and Rupert Data
     Cornwell and Rupert Returns to Schooling Data, 595 individuals, 7 years. Variables in the file are:
     EXP   = work experience
     WKS   = weeks worked
     OCC   = occupation, 1 if blue collar
     IND   = 1 if manufacturing industry
     SOUTH = 1 if resides in south
     SMSA  = 1 if resides in a city (SMSA)
     MS    = 1 if married
     FEM   = 1 if female
     UNION = 1 if wage set by union contract
     ED    = years of education
     LWAGE = log of wage = dependent variable in regressions
     These data were analyzed in Cornwell, C. and Rupert, P., "Efficient Estimation with Panel Data: An Empirical Comparison of Instrumental Variable Estimators," Journal of Applied Econometrics, 3, 1988, pp. 149-155. See Baltagi, page 122 for further analysis. The data were downloaded from the website for Baltagi's text.

  47. Specification: Quadratic Effect of Experience

  48. The Effect of Education on LWAGE

  49. What Influences LWAGE?