
‘Blue remembered hills’? – A qualitative appreciation of Quantitative Methodology


Presentation Transcript


  1. ‘Blue remembered hills’? – A qualitative appreciation of Quantitative Methodology
  Ian Shaw, PhD Summer School, 6 June 2013

  2. A position
  • There are real and significant differences among the social work research community that cannot and should not be dissolved, either into some abstract level of agreement or into a pragmatic consensus of one variety or another.
  • I should always assume – unless there seems to be good evidence to the contrary – that those who hold a view I happen not to share do so on the basis of an effort to think carefully and authentically about it, and that they have as much right as I do to have their voice heard.
  • Social work research cannot be understood if it is sealed off from wider social science.
  • Past and present

  3. Past and present?

  Into my heart an air that kills
  From yon far country blows:
  What are those blue remembered hills,
  What spires, what farms are those?

  That is the land of lost content,
  I see it shining plain,
  The happy highways where I went
  And cannot come again.

  A. E. Housman

  4. Not an overview of the merits of quantitative methods
  • Though I have done several substantial quantitative studies, I am not a quantitative methodologist.
  • A qualitative appreciation of quantitative methods.
  • The origins of this presentation – and of this session.

  5. References
  • Shaw, I. (2012) ‘Serendipity, Misfires and Occasional Patterns: A career in social work research’, in I. Shaw, Practice and Research. Aldershot: Ashgate.
  • Shaw, I. (2012) ‘The Positive Contributions of Quantitative Methodology to Social Work Research: A view from the sidelines’, Research on Social Work Practice 22 (2): 129–134.
  • Shaw, I. and Norton, M. (2008) ‘Kinds and Quality of Social Work Research’, British Journal of Social Work 38 (5): 953–970.
  • Shaw, I. (2008) ‘Merely Experts? Reflections on the History of Social Work, Science and Research’, Research, Policy and Planning 26 (1): 57–65.

  6. Gifts and Borrowings
  • By ‘gifts’ I have in mind those occasions – by no means universal – when quantitative scholars take positions on the nature and priority of different sources of evidence that permit reciprocity in exchanges between quantitative and qualitative scholars.
  • By ‘borrowings’ I have in mind ways in which design solutions and fieldwork methods in qualitative research may gain from adopting aspects of the logic of quantitative comparison studies.

  7. Gift – modesty of claims for experimental designs
  • ‘The research design used in the Kingswood study was signally unsatisfactory on a number of counts. Not only did ethical difficulties inherent in the design contribute to the premature closure of the project, but even if it had been completed as planned, the results would have been of doubtful scientific value, since they would have provided a poor basis for any generalisations about effective treatment methods. In addition the research disrupted many aspects of the administration and life in the school, and because it took such a long time (and would have taken even longer if it had gone according to plan), it is doubtful that decisions about treatment policy could have been suspended until its completion.’ (Clarke & Cornish, 1972, p. 19)
  • Cf. Cronbach et al. on the notion of evaluation as a summative, single, stand-alone study in which ‘the program is to “play statue” while the evaluator’s slow film records its picture’ (1980, p. 56).

  8. Modesty of claims for experimental designs – 2
  • Bill Reid, on what we would call RCTs: ‘It is like trying to decide which horse won a race viewed at a bad angle from the grandstand during a cloudburst’ (Reid, 1988).
  • The question ‘Does it work?’ is a sceptical question and ‘functions as an exclusionary gatekeeper’ (Bogdan & Taylor, 1994).
  • A ‘program that appears superior to a rival program in isolation may be inferior when each program is embedded in the regular sequence of school experience’ (Cronbach et al., 1980).

  9. Lee Cronbach
  • He complains of evaluation research being geared to justification rather than discovery: ‘Investigators ... should sniff around the phenomenon and probe unsystematically for a long while before they mount a wrap-up study intended to “establish” what they have perceived’ (1986: 102).
  • He also casts doubt on the value of the notion of replication: ‘A program evaluation is so dependent on its context that replication of it is only a figure of speech’ (Cronbach et al., 1980: 222).

  10. Gift – restraint on the nature of scientific knowledge
  • Reid again, saying to me and Nick Gould that we ‘might consider some reference to the long-held notion that science is, indeed, as Dewey once put it, “common sense, writ large”’ ...
  • ... and acknowledging that theoretical assumptions are implicit in scientific practice when saying ‘it may make sense to construe scientific practice as a “perspective” on intervention’, and his willingness ‘to accredit client ideas about measurement, data collection and the like that might not fit conventional research notions’.
  • Kirk and Reid went as far as to say that this understanding of a scientific practice perspective ‘could be used with advocacy research, even though the practitioner researcher might need to forgo his or her “neutrality”’ (Kirk & Reid, 2002: 89).

  11. Two kinds of concession
  • An acknowledgement of advocacy research.
  • Conceding that conventional scientific logic may not be the only player in the game.
  • Byrne expresses it thus: ‘measurement is a process of interpretation, no less than the processes of interpretation which underpin qualitative research practice’ (Byrne, 2011: 32).
  • ‘In my opinion, social science is cumulative, not in possessing ever-more-refined answers about fixed questions, but in possessing an ever-richer repertoire of questions’ (Cronbach, 1986, p. 91).

  12. Borrowings
  • Ian Sinclair: qualitative methods are in many ways ‘more adapted to the complexity of the practitioner’s world than the blockbuster RCT’.
  • Qualitative evaluation can assess causality ‘as it actually plays out in a particular setting’ (Miles & Huberman, 1994).
  • A qualitative proxy for control within a natural setting.
  • E.g. the simulated client – Wasoff and Dobash. Those who evaluate the process of professional practice come face to face with the invisibility of practice. How may we learn the ways in which lawyers, teachers, general medical practitioners, or social workers practise? How would different professionals deal with the same case?

  13. Patton
  • The creation of qualitative matrices for exploring linkages between process and outcome:
  ‘Suppose we have been evaluating a juvenile justice program that places delinquent youth in foster homes…. A regularly recurring process theme concerns the importance of “letting kids learn to make their own decisions”. A regularly recurring outcome theme involves “keeping the kids straight” … By crossing the program process (“kids making their own decisions”) with the program outcome (“keeping the kids straight”), we create a data analysis question: What actual decisions do juveniles make that are supposed to lead to reduced recidivism? We then carefully review our field notes and interview quotations, looking for data that help us understand how people in the program have answered this question based on their actual behaviors and practices. By describing what decisions juveniles actually make in the program, the decision makers to whom our findings are reported can make their own judgements about the strength or weakness of the linkage…’ (Patton, 2002)

  14. Borrowings 2 – structured methods within ethnography
  • Systematic Self Observation – Rodriguez and Ryave (2002).
  • Training informants ‘to observe and record a selected feature of their everyday experience’, so that participants ‘go about their lives while alertly observing’ the matter of interest (p. 2). The focus is on understanding the ordinary – in particular the covert, the elusive and the personal. In an effort to overcome the ‘numbness to the details of everyday life’ (p. 4), respondents are asked to observe ‘a single, focused phenomenon that is natural to the culture, is readily noticeable, is intermittent ... is bounded ... and is of short duration’ (p. 5), and also to focus on the subjective.

  15. Doing systematic self observation
  • While observing, participants are instructed not to act differently than usual, never to produce instances, and not to judge the propriety of the action – ‘do not judge it, do not slow down, do not speed up, do not change it, do not question it – just observe it’.

  16. End note
  • Nothing said in this presentation supports the naïve consensual synthesizing of quantitative and qualitative methods that is sometimes apparent in the current fad for mixed methods. But the gifts and borrowings described here may enrich both qualitative and quantitative inquiry.

  17. References
  • Byrne, D. (2011) Applying Social Science. Bristol: The Policy Press.
  • Clarke, R. & Cornish, D. (1972) The Controlled Trial in Institutional Research. London: HMSO.
  • Cronbach, L. (1986) ‘Social Inquiry by and for Earthlings’, in D. W. Fiske & R. A. Shweder (eds) Metatheory in Social Science. Chicago: University of Chicago Press.
  • Cronbach, L., Ambron, S., Dornbusch, S., Hess, R., Hornik, R., Phillips, D., Walker, D. & Weiner, S. (1980) Toward Reform of Program Evaluation. San Francisco: Jossey-Bass.
  • Kirk, S. A. & Reid, W. J. (2002) Science and Social Work: A Critical Appraisal. New York: Columbia University Press.

  18. References – continued
  • Patton, M. Q. (2002) Qualitative Research and Evaluation Methods. Thousand Oaks: Sage.
  • Reid, W. J. (1988) ‘Service Effectiveness and the Social Agency’, in R. Patti, J. Poertner & C. Rapp (eds) Managing for Effectiveness in Social Welfare Organizations. New York: Haworth.
  • Reid, W. J. (2002) ‘In the Land of Paradigms, Method Rules’, Qualitative Social Work 1: 291–295.
  • Reid, W. J. & Zettergren, P. (1999) ‘A Perspective on Empirical Practice’, in I. Shaw & J. Lishman (eds) Evaluation and Social Work Practice, pp. 41–62. London: Sage.
  • Rodriguez, N. & Ryave, A. (2002) Systematic Self Observation. Thousand Oaks: Sage.
