
Analyzing Bank Efficiency: Are “too-big-to-fail” Banks Efficient?



  1. Measuring Economic Performance Analyzing Bank Efficiency: Are “too-big-to-fail” Banks Efficient? Hulusi Inanoglu, Senior Supervisory Financial Analyst, Federal Reserve Board, Market & Liquidity Risk / Michael Jacobs, Jr., Senior Financial Economist, Office of the Comptroller of the Currency, Risk Analysis Division, US Department of the Treasury / Junrong Liu, Rice University / Robin C. Sickles, Rice University and Loughborough University, UK. March 22nd, 2012

  2. Disclaimer The views expressed herein are those of the authors and do not necessarily represent the views of the Board of Governors of the Federal Reserve System, the U.S. Office of the Comptroller of the Currency or the U.S. Department of the Treasury.

  3. Outline • Introduction and Motivation • Econometric Models for Bank Efficiency • Call Report Data • Estimation Results • Conclusions & Directions for Future Research

  4. Introduction and Motivation

  5. Introduction and Motivation (continued) • Pre-crisis: Bigger is better! • Economies of scale and scope • Better competition at the international level • Post-crisis: Large banks = trouble makers! • Moral hazard (“too-big-to-fail”) • Systemic risk • 15 years ago, the six largest U.S. banks had assets equal to 17% of US GDP. • Today, the six largest banks have total assets estimated to be in excess of 63% of US GDP!

  6. Econometric Models for Bank Efficiency: Stochastic Frontier Estimation • Sickles (2005) • Use a set of semiparametric efficient (SPE) estimators and alternative parametric estimators. • Integrate the findings from the competing studies. • Estimate an average efficiency measure. • Specify technology as Cobb-Douglas and estimate efficiencies using different estimators. • Expand the standard model by adding control variables for credit risk, market risk and liquidity risk. • Credit risk is proxied by gross charge-off rates (dollar charge-offs normalized by lending book assets). • Market risk is proxied by trading return ratios (quarterly average trading revenue normalized by trading assets). • Liquidity risk is proxied by liquidity ratios (cash balances normalized by the book value of assets).
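Each of the three risk proxies above is a simple ratio of Call Report items. A minimal sketch, with function and argument names of our own choosing (the study works from the raw regulatory fields):

```python
def credit_risk(gross_chargeoffs: float, lending_book_assets: float) -> float:
    """Gross charge-off rate: dollar charge-offs / lending book assets."""
    return gross_chargeoffs / lending_book_assets

def market_risk(avg_trading_revenue: float, trading_assets: float) -> float:
    """Trading return ratio: quarterly average trading revenue / trading assets."""
    return avg_trading_revenue / trading_assets

def liquidity_risk(cash_balances: float, book_value_assets: float) -> float:
    """Liquidity ratio: cash balances / book value of total assets."""
    return cash_balances / book_value_assets
```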

  7. Econometric Models for Bank Efficiency: Stochastic Frontier Estimation • yi,t: response variable (a measure of bank output), xi,t ∈ ℝp: p-vector of exogenous covariates, β: regression coefficients to be estimated, ηi: individual effects (inefficiency), ui,t: disturbance term, so the baseline model is yi,t = x′i,tβ + ηi + ui,t • Interpret this as a log-linear regression, a transformation of a Cobb-Douglas function: m outputs, Yi,t=exp(yi,t), with m−1 of the Xi,t=exp(xi,t) the remaining outputs and n=p−m+1 of the remaining X′s the inputs. Then the m-output, n-input deterministic distance function satisfies (in the standard Cobb-Douglas form) ln D(Yi,t, Xi,t) = α₀ + Σm γm ym,i,t + Σn βn xn,i,t ≤ 0, with linear homogeneity in outputs (Σm γm = 1) • The following model can then be derived by normalizing on the first output: −y₁,i,t = α₀ + Σm≥2 γm(ym,i,t − y₁,i,t) + Σn βn xn,i,t + ηi + ui,t
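For concreteness, the baseline panel model yi,t = x′i,tβ + ηi + ui,t can be estimated with a within (fixed-effects) regression, one of the parametric benchmarks the SPE estimators are compared against. A sketch on simulated data (dimensions echo the study's 40-bank, 80-quarter panel; nothing here uses the actual Call Report data):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, p = 40, 80, 3  # banks, quarters, covariates -- echoing the study's panel
beta = np.array([0.5, -0.3, 0.2])

# Simulate the panel model y_it = x_it' beta + eta_i + u_it
eta = rng.normal(0.0, 1.0, size=N)          # bank-specific effects (inefficiency)
x = rng.normal(size=(N, T, p))
u = rng.normal(0.0, 0.1, size=(N, T))
y = x @ beta + eta[:, None] + u

# Within (fixed-effects) estimator: demean each bank's series over time,
# then run pooled OLS on the demeaned data (the eta_i drop out).
x_dm = x - x.mean(axis=1, keepdims=True)
y_dm = y - y.mean(axis=1, keepdims=True)
beta_hat = np.linalg.lstsq(x_dm.reshape(N * T, p), y_dm.reshape(N * T), rcond=None)[0]

# Recover effects and an efficiency score relative to the best bank:
eta_hat = (y - x @ beta_hat).mean(axis=1)
efficiency = np.exp(eta_hat - eta_hat.max())
```

The relative-efficiency step (exponentiating the estimated effect minus the largest effect) is the usual way time-invariant frontier estimators turn fitted effects into efficiency scores in [0, 1].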

  8. Econometric Models for Bank Efficiency: Stochastic Frontier Estimation • (ηi, Xi) are assumed to be iid variables having unknown density h(·,·) on ℝ1+pT, which can be specified using kernel smoothers by the semiparametric efficient (SPE) estimators. • The restriction is imposed that the support of η's marginal density is bounded either from above or below. • Two cases are considered: one where u and (η,X) are independent, and one allowing a certain level of dependence between them. • The Park, Sickles and Simar (1998; “PSS1”) extension of Park and Simar (1994): the regressors xi,t are conditionally independent of the unobserved random effects ηi given a set of correlated regressors zi,t, and ηi depends on zi,t only through its long-run movement, i.e., through the time average z̄i = T⁻¹ Σt zi,t

  9. Econometric Models for Bank Efficiency: Stochastic Frontier Estimation (continued) • Cornwell, Schmidt and Sickles (1990; “CSS”) introduced a set of generalized least squares and instrumental variables estimators for a panel data model with heterogeneity in slopes and intercepts. The model they consider generalizes the baseline by allowing the effects to follow a bank-specific quadratic time path: yi,t = x′i,tβ + ηi(t) + ui,t, with ηi(t) = θi,1 + θi,2t + θi,3t² • The within estimator (CSSW) is then β̂ = (X′MQX)⁻¹X′MQy, where MQ = I − Q(Q′Q)⁻¹Q′ annihilates the individual-specific trend terms collected in Q • The GLS estimator (CSSG) is ordinary least squares applied to the GLS transformation of the model above.
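A sketch of the CSSW mechanics under the quadratic-trend specification described above: simulated effects θi,1 + θi,2t + θi,3t² are projected out with MQ before pooled OLS. This is our reconstruction of the estimator's logic on simulated data, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, p = 40, 80, 2
beta = np.array([0.7, -0.4])
t = np.arange(1.0, T + 1)

# CSS (1990): effects follow bank-specific quadratic time paths
# eta_i(t) = theta_i1 + theta_i2 * t + theta_i3 * t^2
Q = np.column_stack([np.ones(T), t, t ** 2])      # (T, 3) trend basis
theta = rng.normal(0.0, 0.01, size=(N, 3))
effects = theta @ Q.T                             # (N, T)

x = rng.normal(size=(N, T, p))
y = x @ beta + effects + rng.normal(0.0, 0.1, size=(N, T))

# CSSW within estimator: annihilate the trend basis within each bank
# with M_Q = I - Q (Q'Q)^{-1} Q', then run pooled OLS on the residuals.
M_Q = np.eye(T) - Q @ np.linalg.solve(Q.T @ Q, Q.T)
X = (M_Q @ x).reshape(N * T, p)   # applies M_Q along the time axis of each bank
Y = (y @ M_Q).reshape(N * T)      # M_Q is symmetric, so y_i' M_Q = (M_Q y_i)'
beta_hat = np.linalg.lstsq(X, Y, rcond=None)[0]
```

Note this generalizes the plain within estimator: with Q containing only the constant column, MQ reduces to the ordinary time-demeaning operator.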

  10. Econometric Models for Bank Efficiency: Stochastic Frontier Estimation (concluded) • Battese & Coelli (1992; “BC”) consider a stochastic frontier production function for panel data with an exponential specification of time-varying firm effects: ηi,t = ηi·exp(−γ(t − T)) • For a balanced panel, τ(i) = {1,2,…,T}; further assuming a (truncated normal) distribution for the ηi yields MLEs
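The exponential decay specification above pins each firm's time-varying effect to its terminal-period level (written here with decay parameter γ to avoid a symbol clash with the effect ηi); a tiny sketch:

```python
import math

def bc_effect(eta_i: float, gamma: float, t: int, T: int) -> float:
    """Battese-Coelli (1992) time-varying effect: eta_i,t = eta_i * exp(-gamma * (t - T))."""
    return eta_i * math.exp(-gamma * (t - T))

# At t = T the effect equals eta_i exactly; for gamma > 0 the effect is
# larger early in the sample and decays monotonically toward eta_i.
path = [bc_effect(0.5, 0.05, t, 80) for t in (1, 40, 80)]
```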

  11. Econometric Models for Bank Efficiency: Quantile Regression • The τth conditional quantile function of the response yit is Qyit(τ | xit) = x′itβ(τ) + ηi • Note that only the effects β(τ) of the covariates X are allowed to depend upon the quantile τ, while η captures specific sources of unobserved heterogeneity not controlled for by the covariates • As in Galvao (2009) we restrict the estimates of the individual-specific effects to be independent of τ across the quantiles. In most applications, T is relatively small compared to the number of individuals N, so it may be difficult to estimate a τ-dependent distributional individual effect.

  12. Econometric Models for Bank Efficiency: Quantile Regression (concluded) • To estimate the model above for different quantiles simultaneously, we can solve min(η,β) Σk Σt Σi vk ρτk(yit − ηi − x′itβ(τk)) • k indexes the K quantiles {τ₁,...,τK}, ρτ(u)≜u(τ − I(u<0)) is the piecewise-linear quantile loss function (Koenker and Bassett, 1978), and the vk are weights that control the influence of each quantile on the parameter estimates • The choice of weights is analogous to discretely weighted L-statistics (Mosteller, 1946), a common choice of which is Tukey's trimean (Koenker, 1984)
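The check loss ρτ and the weighted multi-quantile objective are easy to state in code; a minimal sketch with Tukey's trimean weights on the quartiles (illustrative, not the estimation code used in the study):

```python
def rho(u: float, tau: float) -> float:
    """Piecewise-linear quantile (check) loss: rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1.0 if u < 0 else 0.0))

def weighted_quantile_loss(residuals, taus, weights) -> float:
    """Objective contribution: sum_k v_k * sum_i rho_{tau_k}(u_i)."""
    return sum(v * sum(rho(u, tau) for u in residuals)
               for tau, v in zip(taus, weights))

# Tukey's trimean weights on the quartiles {0.25, 0.5, 0.75}:
taus, weights = [0.25, 0.5, 0.75], [0.25, 0.5, 0.25]
loss = weighted_quantile_loss([1.0, -2.0, 0.5], taus, weights)
```

Note that ρτ penalizes under- and over-prediction asymmetrically: a positive residual costs τ·u while a negative one costs (1−τ)·|u|, which is what makes the minimizer the τth conditional quantile.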

  13. Call Report Data • Regulatory reports filed by banks with supervisory agencies, comprising financial data. • Use panel data from 1990 through 2009 for U.S. commercial banks (80 quarters). • Focus on the top 50 banks (due to some missing data, 40 banks out of the top 50 are in our panel). • Unique dataset. • Data are merged on a pro-forma basis (i.e., a non-surviving bank's data are represented as part of the surviving bank going back in time).
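The pro-forma merging rule described above can be sketched as follows (the bank labels and single-variable panel are illustrative; the study applies this across all Call Report fields for each merger):

```python
# Pro-forma merge: when bank B is acquired by bank A, B's historical series
# is folded into A's going back in time, so the merged entity has a
# consistent history. Panel maps bank -> {quarter -> value}.

def proforma_merge(panel: dict, survivor: str, absorbed: str) -> dict:
    """Add the absorbed bank's series to the survivor's, quarter by quarter."""
    merged = dict(panel[survivor])
    for quarter, value in panel[absorbed].items():
        merged[quarter] = merged.get(quarter, 0.0) + value
    out = {bank: series for bank, series in panel.items() if bank != absorbed}
    out[survivor] = merged
    return out

panel = {"A": {"1990Q1": 100.0, "1990Q2": 110.0},
         "B": {"1990Q1": 40.0, "1990Q2": 45.0}}
merged = proforma_merge(panel, "A", "B")  # A now carries A+B back in time
```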

  14. Call Report Data (continued) • The variables used to estimate the Cobb-Douglas stochastic frontier production function are: • RELOAN: Real Estate Loans (“REL”) • CILOAN: Commercial and Industrial Loans (“CIL”) • CONSLOAN: Consumer Loans (“CL”) • PREMFXAST: Premises & Fixed Assets (“PFA”) • NUMEMP: Number of Employees (“NOE”) • PRCHFND: Purchased Funds (“PF”) • NONTRNSACC: Savings Accounts (“SA”) • OTHACC: Certificates of Deposit (“CD”) • TRANSACC: Demand Deposits (“DD”) • The risk proxies are: • CREDIT RISK: Gross Charge-off Ratio (“CR”) • LIQUIDITY RISK: Liquidity Ratio (“LR”) • MARKET RISK: Trading Returns (“MR”)

  15. Call Report Data: Summary

  16. Call Report Data (continued)

  17. Call Report Data (continued)

  18. Call Report Data (continued)

  19. Call Report Data (continued)

  20. Call Report Data (continued)

  21. Call Report Data (continued)

  22. Call Report Data (continued)

  23. Call Report Data (continued)

  24. Call Report Data (continued)

  25. Call Report Data (continued)

  26. Estimation Results • Results are highly statistically significant, and the signs/magnitudes of the coefficient estimates are generally consistent across estimators • Across models the estimates on the output variables have positive signs while those on the input variables have negative signs, consistent with what the distance-function specification implies, since all the input variables (PFA, NOE, PF, SA, CD and DD) contribute positively to output • Among the inputs, Savings Accounts (SA) and Certificates of Deposit (CD) have the biggest impact; the magnitudes of the estimates on SA are somewhat greater and more varied across models than those on CD • The magnitudes of the coefficient estimates on Number of Employees (NOE) and Demand Deposits (DD) are similar, but the estimates on NOE are greater than those on DD across models

  27. Estimation Results (continued)

  28. Estimation Results (continued) • Controls for risk: the risk variables are expected to be detrimental to banks' output and thus to carry positive signs. • However, the estimates on Credit Risk (CR) have negative signs except in the CSSW and CSSG models. This does NOT mean that higher credit risk implies higher loans, since the magnitudes of the estimates are much smaller than those on Liquidity Risk (5-10 times smaller) • Further, since credit risk analysis is a core competency of banks, CR as we are measuring it may not be such a bad thing: credit risk can be decomposed into default risk and credit spread / downgrade risk, and the latter is more closely related to Market Risk (MR) and Liquidity Risk (LR) than to the charge-offs that we are measuring • So we may argue that in controlling for MR and LR, what is left over is a type of risk that is beneficial in some sense • Note that this falls in line with the policy arguments that banks should be restricted from risky trading activities and stick with traditional lending to a greater extent

  29. Estimation Results (continued) • The estimates on LR are more substantial and vary more narrowly than those on MR. This result supports the argument that banks should be restricted from engaging in highly risky activities so as to keep an appropriate liquidity ratio. • Figure 11 displays the time trend of all the time-variant estimators: they show a decreasing trend over time. • Figure 12 displays the average efficiencies of both the time-invariant and time-variant estimators.

  30. Estimation Results (continued) • Figure 13 displays the Scale Efficiency estimates using the time-invariant estimators. • The Scale Efficiency is derived following Balk (2001). • In order to explore the relationship between scale efficiency and bank size, we also calculate the correlation coefficients: they are 0.9432 (FX), 0.9353 (RD), 0.9368 (HT) and 0.9373 (PSS1) respectively. This is consistent with what we can read from Figure 13: larger banks have lower scale efficiency levels.

  31. Figure 11 Time trend of time-variant estimators

  32. Figure 12 Average Efficiencies of all estimators

  33. Figure 13 Scale Efficiency estimates

  34. Estimation Results (continued) The conclusion of the stochastic frontier estimation is that the largest surviving banks, in spite of growing, have reduced their ability to make loans over the last two decades as they took on increasing types of risk, and this is reflected in a decline in efficiency since the early 1990's, as implied by the econometric models that allow efficiency to vary temporally. In addition, larger banks are shown to have lower scale efficiency levels, as suggested by the time-invariant estimators.

  35. Quantile Regression for Panel Data • Two uses of quantile regression: • Dealing with non-normal distribution of the dependent variable. • The effects of independent variables vary across the levels of the dependent variable. • Quantile regression for panel data: • Roger Koenker (2004): “Quantile regression for longitudinal data” • Marco Geraci and Matteo Bottai (2007): “Quantile regression for longitudinal data using the asymmetric Laplace distribution”

  36. Estimation Results: Quantile Regression using the Pooled Data

  37. Estimation Results: Quantile Regression using the Pooled Data (continued)

  38. Estimation Results: Quantile Regression under a Fixed Effect Framework

  39. Estimation Results: Quantile Regression under a Fixed Effect Framework (cont’d.)

  40. Estimation Results: Quantile Regression under a Fixed Effect Framework (cont’d.)

  41. Estimation Results – Combined (cont’d.)

  42. Estimation Results – Combined (cont’d.)

  43. Estimation Results (concluded) • While all inputs increase output, the magnitude and pattern of their impact vary across both inputs and quantiles, and given the high statistical significance these differences are material. • SA and CD have a larger impact on output at all quantiles, which is consistent with the results from the Stochastic Frontier Analysis. • Among all the inputs, the magnitude of the impact of CD decreases across quantiles, while those of the remaining inputs display a flatter pattern. • The efficiency estimate using Quantile Regression is 0.5257 (τ-independent), which is close to the efficiency level estimated from the Fixed Effect model. • The Scale Efficiency estimate also displays a pattern similar to those from the time-invariant estimators in SFA.

  44. Estimation Results (concluded) • In Quantile Regression, both MR and LR have positive signs, and thus a negative impact on banks' output, and the magnitudes are of the same level as in SFA. • However, the CR estimates have negative signs under Quantile Regression and in several models in the SFA: as credit risk analysis is a core competency for banks, credit risk as we are measuring it may not be such a bad thing and may even be good. • Credit risk can be decomposed into default risk and credit spread / downgrade risk. The latter is closely related to market risk and liquidity risk, whereas the former is more closely related to the charge-offs that we are measuring. Therefore, in controlling for market risk, what is left over is a type of risk that is beneficial in some sense. This might fall in line with the policy arguments that banks should be restricted from risky trading activities and stick with traditional lending to a greater extent.

  45. Conclusions and Directions for Future Research • The stochastic frontier models show that the largest surviving banks' average efficiency is between 0.35 and 0.65. • Efficiency has been decreasing over time as the banking industry has grown. • Savings Accounts and Certificates of Deposit are the most important inputs to banking services, as implied by both Stochastic Frontier Analysis and the Quantile Regression method. • We observed that measures of risk primarily decrease bank output, and that Liquidity Risk generally dominates the other risk types in its influence. Panel quantile regression results also indicate this. • Our results highlight the importance of the prudential supervisory role in controlling the level of risk in the banking sector. • One policy implication of our study is that a better-capitalized and somewhat smaller banking system might imply a more efficiently functioning industry.

  46. Conclusions & Directions for Future Research (concluded) • Several fruitful avenues of extension for this research program: • We may compute a weighted average of efficiency based on fixed assets instead of the simple average efficiency • We may pursue alternative data-sets, such as other types of financial services firms (e.g., insurers, brokers) or data from other jurisdictions • We may expand our set of explanatory variables, with alternative controls (e.g., size, leverage) or an expanded set of inputs (e.g., a measure of technological change) • We may expand our suite of alternative models.

  47. Appendix: Technical Considerations in Stochastic Production Frontier Estimation • Denote the respective exogenous and endogenous variables as (XT,YT)T and ℱ the set of all joint distributions for these • Partition the model parameter vector θ into slope coefficients of interest β and nuisance parameters η • Consider a distribution P(β₀,η₀) ∈ ℱ₀, a sub-space of ℱ forming a regular parametric sub-model (Ibragimov, 1981) • Let ℓ(x,y,β,η) denote the log-likelihood of observation (xT,yT)T, with scores ℓβ(x,y)=((∂ℓ)/(∂β))|(β₀,η₀) and ℓη(x,y)=((∂ℓ)/(∂η))|(β₀,η₀) • The efficient score function is ℓ∗=ℓβ−π(ℓβ|[ℓη]), where [ℓη] denotes the linear span S generated by ℓη and π(ℓ|S) is the vector of L₂-norm projections of each component of ℓ onto that space • Project the scores with respect to the slopes onto the tangent space of the nuisance parameters, thereby purging them of the information carried by the nuisance parameters

  48. Appendix: Technical Considerations in Stochastic Production Frontier Estimation (cont.) • Derive efficient scores that are by construction orthogonal to the information contained in the set of nuisance parameters (adaptively estimable) • Even in the absence of knowledge regarding the nuisance parameters, the estimator is still efficient (Pagan & Ullah, 1999) • The estimator βN,T is called semiparametric efficient if it is asymptotically normal with mean β and variance N⁻¹I⁻¹(P;β) • I(P;β) = E(ℓ∗ℓ∗T) is the information matrix for the semiparametric estimator having asymptotic distribution √(NT)(βN,T−β) → N(0, I⁻¹(P;β)) (Bickel et al., 1993)
