
Evaluating Organizational Change: How and Why?


Presentation Transcript


  1. Evaluating Organizational Change: How and Why? Dr Kate Mackenzie Davey, Organizational Psychology, Birkbeck, University of London, k.mackenzie-davey@bbk.ac.uk

  2. Aims • Examine the arguments for evaluating organizational change • Consider the limitations of evaluation • Consider different methods for evaluation • Consider difficulties of evaluation in practice • Consider costs and benefits in practice

  3. Arguments for evaluating organizational change • Sound professional practice • Basis for organizational learning • Central to the development of evidence-based practice • Widespread cynicism about fads and fashions • To influence social or governmental policy

  4. Research and evaluation • Research focuses on relations between theory and empirical material (data) • Theory should provide a base for policy decisions • Evidence can illuminate and inform theory • Show what does not work as well as what does • Highlight areas of uncertainty and confusion • Demonstrate the complexity of cause-effect relations • Understand, predict, control

  5. Pragmatic Evaluation: what matters is what works • Why it works may be unclear • Knowledge increases complexity • Reflexive monitoring of strategy links to organizational learning (OL) and knowledge management (KM) • Evidence and cultural context • May be self-fulfilling • Tendency to seek support for policy • Extent of sound evidence unclear

  6. Why is sound evaluation so rare? • Practice shows that evaluation is an extremely complex, difficult and highly political process in organizations • The question may be "how many?", not "what works?"

  7. Evaluation models • Pre-evaluation • Goal-based (Tyler, 1950) • Realistic evaluation (Pawson & Tilley, 1997; Sanderson, 2002) • Experimental • Constructivist evaluation (Stake, 1975) • Contingent evaluation (Legge, 1984) • Action learning (Reason & Bradbury, 2001) • "A study should be technically sound, administratively convenient and politically defensible." (Alec Rodger)

  8. 1.1 Pre-evaluation (Goodman & Dean, 1982): the extent to which it is likely that A has an impact on B • Scenario planning • Evidence-based practice • All current evidence thoroughly reviewed and synthesised • Meta-analysis (see the sketch below) • Systematic literature review • Formative vs. summative (Scriven, 1967)
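  A minimal sketch of the meta-analysis step mentioned above: inverse-variance (fixed-effect) pooling of standardized effect sizes from earlier studies. The studies, effect sizes and variances below are hypothetical and purely for illustration.

    import math

    # (effect size d, variance of d) from three hypothetical prior studies
    studies = [(0.40, 0.05), (0.25, 0.02), (0.55, 0.08)]

    # Inverse-variance weights: more precise studies count for more
    weights = [1.0 / var for _, var in studies]
    pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))  # standard error of the pooled estimate

    print(f"Pooled effect size: {pooled:.2f}")
    print(f"95% CI: [{pooled - 1.96*se:.2f}, {pooled + 1.96*se:.2f}]")

  On these made-up numbers the pooled estimate is about 0.33, illustrating how pre-evaluation can summarise what past evidence suggests before committing to a specific change programme.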

  9. 1.2 Pre-evaluation issues • Based on theory and past evidence: not clear it will generalise to the specific case • Formative: influences planning • Argument: to understand a system you must intervene (Lewin)

  10. 2.1 Goal-based evaluation (Tyler, 1950) • Objectives used to aid planned change • Can help clarify models • Goals from benchmarking, theory or pre-evaluation exercises • Predict changes • Measure pre- and post-intervention • Identify the interventions • Were objectives achieved?

  11. 2.2 Difficulties with goal-based evaluation: Who sets the goals? How do you identify the intervention? • Tendency to managerialism (unitarist) • Failure to accommodate value pluralism • Over-commitment to scientific paradigm • What is measured gets done • No recognition of unanticipated effects • Focus on single outcome, not process

  12. 3.1 Realistic evaluation: Conceptual clarity (Pawson & Tilley, 1997) • Evidence needs to be based on clear ideas about concepts • Measures may be derived from theory • Examine definitions used elsewhere • Consider specific examples • Ensure all aspects are covered

  13. 3.2 Realistic evaluation: Towards a theory. What are you looking for? • Make assumptions and ideas explicit • What is your theory of cause and effect? • What are you expecting to change (outcome)? • How are you hoping to achieve this change (mechanism)? • What aspects of the context could be important?

  14. 3.3 Realistic evaluation Context-mechanism-outcome • Context: What environmental aspects may affect the outcome? • What else may influence the outcomes? • What other effects may there be?

  15. 3.4 Realistic evaluation Context-mechanism-outcome • Mechanism: What will you do to bring about this outcome? • How will you intervene (if at all)? • What will you observe? • How would you expect groups to differ? • What mechanisms do you expect to operate?

  16. 3.5 Realistic evaluation Context-mechanism-outcome • Outcome: What effect or outcome do you aim for? • What evidence could show it worked? • How could you measure it?

  17. 4.1 Experimental evaluation: Explain, predict and control by identifying causal relationships • Theory of causality makes predictions about variables, e.g. training increases productivity • Two randomly assigned matched groups: experimental and control • One group experiences the intervention, one does not • Measure outcome variable pre-test and post-test (longitudinal) • Analyse for statistically significant differences between the two groups (see the sketch below) • Outcome linked back to modify theory • The gold standard
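  A minimal sketch of the design described above: simulated pre-test/post-test scores for a randomly assigned training group and a control group, with change scores compared by an independent-samples t-test. All data, group sizes and the "productivity" measure are hypothetical.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    n = 30  # participants per randomly assigned group

    # Simulated pre- and post-intervention productivity scores
    pre_train = rng.normal(50, 10, n)
    post_train = pre_train + rng.normal(5, 5, n)   # assumed training effect of ~5 points
    pre_ctrl = rng.normal(50, 10, n)
    post_ctrl = pre_ctrl + rng.normal(0, 5, n)     # control group: no intervention

    # Compare change scores (post minus pre) between the two groups
    change_train = post_train - pre_train
    change_ctrl = post_ctrl - pre_ctrl
    t, p = stats.ttest_ind(change_train, change_ctrl)

    print(f"Mean change, training group: {change_train.mean():.2f}")
    print(f"Mean change, control group:  {change_ctrl.mean():.2f}")
    print(f"t = {t:.2f}, p = {p:.4f}")

  A significant difference in change scores is the kind of evidence this model treats as showing the intervention "worked"; the difficulties on the next slide explain why such designs are rarely achievable in full inside organizations.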

  18. 4.2 Difficulties with experimental evaluation in organizations • Difficult to achieve in organizations • Unitarist view • Leaves out unforeseen effects • Problems with continuous change processes • Summative not formative • Generally at best quasi-experimental

  19. 5.1 Constructivist or stakeholder evaluation • Responsive evaluation (Stake, 1975) or fourth-generation evaluation (Guba & Lincoln, 1989) • Constructivist, interpretivist, hermeneutic methodology • Based on stakeholders' claims, concerns and issues • Stakeholders: agents, beneficiaries, victims

  20. 5.2 Response to an IT implementation (Brown, 1998)

  21. 5.3 Constructivist evaluation issues • No one right answer • Demonstrates complexity of issues • Highlights conflicts of interests • Interesting for academics • Difficult for practitioners to resolve

  22. 6. A contingent approach to evaluation (Legge, 1984) • Do you want the proposed change programme to be evaluated? (Stakeholders) • What functions do you wish its evaluation to serve? (Stakeholders) • What are the alternative approaches to evaluation? (Researcher) • Which of the alternatives best matches the requirements? (Discussion)

  23. 7. Action research (Reason & Bradbury, 2001) • Identify good practice • Responds to practical issues in organizations • Engages in collaborative relationships • Draws on diverse evidence • Value orientation: humanist • Emergent, developmental

  24. Problems with realist models • Tendency to managerialism • Over-commitment to scientific paradigm • Context stripping • Over-dependence on measures • Coerciveness: truth as non-negotiable • Failure to accommodate value pluralism • Every act of evaluation is a political act; it is not tenable to claim it is value-free

  25. Problems with the constructionist approach • Evaluation judged by whom, for whom and in whose interests? • Identifies different views, but then what? • Who has power? • Leaves decisions open • May lead to ambiguity

  26. Why not evaluate? • Expensive in time and resources • De-motivating for individuals • Contradiction between "scientific" evaluation models and supportive, organizational learning models • Individual identification with the activity • Difficulties in objectifying and maintaining commitment • "Off the shelf" external evaluation may be inappropriate and unhelpful

  27. Why evaluate? (Legge, 1984) • Overt: aids decision making, reduces uncertainty, learning, control • Covert: rally support or opposition, postpone a decision, evade responsibility, fulfil grant requirements, surveillance

  28. Conclusion • Evaluation is very expensive, demanding and complex • Evaluation is a political process: need for clarity about why you do it • Good evaluation always carries the risk of exposing failure • Therefore evaluation is an emotional process • Evaluation needs to be acceptable to the organization

  29. Conclusion 2 • Plan and decide which model of evaluation is appropriate • Identify who will carry out the evaluation and for what purpose • Do not overload the evaluation process: judgment or development? • Evaluation can give credibility and enhance learning • Informal evaluation will take place whether you plan it or not
