Discussion of From Republic of Science to Audit Society, by Irwin Feller. S. Charlot, ASIRPA, Paris, France; June 13, 2012
Outline • New Questions/Issues & What's at Stake • How Are They Answered? • Validity of Performance Metrics and Methodological Choice(s): Econometrics • Use, Non-Use and Misuse of Research Assessments
Pre-New Public Management Assessment Paradigm • Republic of Science (M. Polanyi) • Peer (Expert) Review • Social Contract
New Public Management Paradigm • Accountability • Deregulation • Competition (among different uses of public funds) • Performance Measurement (for evaluating research uses)
Promises of Research Performance Assessment • Objectives provide a useful baseline for assessing performance. • Performance measurement focuses attention on the end objectives of public policy: on what has happened or is happening outside, rather than inside, the black box. • Well-defined objectives and documentation of results facilitate communication with funders, performers, users, and others.
Limitations of Research Performance Measurement • Returns/impacts to research are uncertain, long-term, and circuitous • Specious precision in the selection of measures • Impacts typically depend on complementary actions by agents outside of Federal agency control • Limited (public) evidence of contributions to improved decision making • Benefits from "failure" are underestimated • Distortion of incentives: opportunistic behavior (young researchers seeking employment, senior researchers chasing future funding)
First comment/issue • Add the role of creativity and very innovative ideas in scientific progress (i.e., "scientific revolutions")
Second comment/question: Complementarities between methodologies • Econometric modeling needs analytical, conceptual modeling of the underlying theory to be pertinent • Econometric analysis also needs to take the policy design, context, etc. into account to be pertinent • Surveys, case studies… • No econometric identification of impacts without these components in the evaluation model
Complementarity, second example: Benefit-Cost Analysis can be done with an econometric model. Steps (RTI, 2010): 1. Conduct Technical Analysis 2. Identify Next Best Alternative 3. Estimate Program Costs 4. Estimate Economic Benefits 5. Determine Agency Attribution 6. Estimate Benefits of Economic Return
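As a rough illustration of how the steps above fit together, here is a minimal benefit-cost sketch in Python. All numbers, function names, and parameter values are invented for the example; they are not taken from the RTI (2010) model.

```python
# Illustrative sketch of a benefit-cost calculation following the steps
# listed above. All inputs are hypothetical, chosen only for the example.

def npv(flows, rate):
    """Net present value of yearly cash flows at a given discount rate."""
    return sum(f / (1 + rate) ** t for t, f in enumerate(flows))

def benefit_cost_ratio(benefits, costs, rate, attribution):
    """Discount benefits and costs, scale benefits by the share
    attributable to the agency, and return the benefit-cost ratio."""
    attributed = [b * attribution for b in benefits]  # agency attribution step
    return npv(attributed, rate) / npv(costs, rate)

# Hypothetical program: 3 years of costs, 5 years of benefits,
# 40% of benefits attributed to the agency, 7% discount rate.
costs = [100.0, 50.0, 50.0]
benefits = [0.0, 30.0, 80.0, 120.0, 120.0]
ratio = benefit_cost_ratio(benefits, costs, rate=0.07, attribution=0.4)
```

The attribution step matters: without it, benefits driven by agents outside the agency's control (a limitation noted earlier) would be credited entirely to the program.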
Microeconometrics of policy evaluation [Figure: timeline comparing outcomes at τ and τ + 1 for treatment and comparison groups] • Issue: a before/after design shows changes "related" to the policy intervention, but does not adjust for "intervening" factors (threats to internal validity). • Reframe the analysis: did the policy "cause" change(s) in the treatment group different from those observable in a comparison/control group?
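The reframed question above is the logic of a difference-in-differences comparison. A minimal sketch, with invented outcome data for illustration only:

```python
# Minimal difference-in-differences sketch: did the policy "cause" a change
# in the treatment group beyond the change seen in a comparison group?
# All data below are invented for illustration.

def diff_in_diff(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD estimate: (treated change) minus (control change).
    The control group's change proxies the 'intervening' factors that a
    simple before/after design would wrongly attribute to the policy."""
    def mean(xs):
        return sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical outcomes at tau (pre) and tau + 1 (post):
effect = diff_in_diff(
    treat_pre=[10, 12, 11], treat_post=[16, 18, 17],  # +6 raw change
    ctrl_pre=[9, 11, 10],   ctrl_post=[11, 13, 12],   # +2 common trend
)
# A before/after design alone would report +6; DiD nets out the +2 trend,
# leaving an estimated policy effect of +4.
```

The sketch assumes the "parallel trends" condition: absent the policy, both groups would have changed by the same amount.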
Third comment/question: Econometric enhancements • Non-parametric analysis: no a priori constraint on the relationship between the outcome (whatever outcome is chosen) and R&D spending or funding • No knowledge production function imposed a priori • Taking into account the effect of non-observable or time-varying characteristics on outcomes • Context, context, context
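One standard way to handle non-observable, time-invariant characteristics is the within (fixed-effects) transformation: demeaning each unit's series removes any constant unit-specific effect. A sketch with invented panel data:

```python
# Sketch of the within (fixed-effects) transformation: subtracting each
# unit's own mean removes constant, unobservable unit characteristics.
# The panel data below are invented for illustration.

def demean_within(panel):
    """panel: {unit: [outcome_t0, outcome_t1, ...]}.
    Subtract each unit's own mean, removing fixed unit effects."""
    out = {}
    for unit, ys in panel.items():
        m = sum(ys) / len(ys)
        out[unit] = [y - m for y in ys]
    return out

# Unit 'a' carries a large unobserved fixed effect (+100) relative to 'b':
panel = {"a": [105.0, 107.0, 109.0], "b": [5.0, 7.0, 9.0]}
within = demean_within(panel)
# After demeaning, both units show identical within-unit variation,
# so the fixed effect no longer contaminates the comparison.
```

Time-varying unobservables are harder: demeaning does not remove them, which is one reason the slides insist on context.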
Fourth comment/question: The "dominant" methodology, in the U.S. but also in the European Union, is expert panels • Problem of network effects • Same issue as peer evaluation and bibliometrics • Only an issue for "low impacts" (publications…) but not for high impacts?
Is Anyone Listening? • My small experience (one evaluation report): no one is listening, as a researcher • I agree that "Doing good may not make you happy, but doing wrong will certainly make you unhappy" • But for a novice at evaluating policy, what are the arguments not to stop this kind of intellectual exercise (other than publishing or funding research)? • What type of advice? For ASIRPA?