
APPLIED SCIENCES & APPLIED RESEARCH Smno-pdklp-2014




Presentation Transcript


  1. APPLIED SCIENCES & APPLIED RESEARCH Smno-pdklp-2014

  2. The Harvard School of Engineering and Applied Sciences (SEAS). SEAS Engineering and Applied Science Disciplines for the 21st Century (in the wheel) and some of the collaborative areas among them (on the outside of the wheel). Downloaded from: http://www.seas.harvard.edu/about-seas/facts-history/seas-today….. 20/9/2012

  3. APPLIED RESEARCH (RISET TERAPAN) "The hardest problems of pure and applied science can only be solved by the open collaboration of the world-wide scientific community." Kenneth G. Wilson. Downloaded from: http://www.longeaton.derbyshire.sch.uk/learning/curriculum_areas/sciences/btec_applied_science….. 20/9/2012

  4. Scope and Importance of Environmental Studies. Because environmental studies is multidisciplinary in nature, it is considered a subject with great scope. Environmental studies is no longer limited to issues of sanitation and health; it is now concerned with pollution control, biodiversity conservation, waste management, and the conservation of natural resources. Downloaded from: http://brawin.blogspot.com/2011/06/scope-and-importance-of-environmental.html….. 20/9/2012

  5. RESEARCH (PENELITIAN): an activity carried out systematically, according to scientific principles and methods, to obtain information, data, and explanations related to understanding and/or testing a branch of science and technology. Law of the Republic of Indonesia (UURI) No. 12 of 2012 on Higher Education, Article 1 point 10. Downloaded from: ….. 16/9/2012

  6. SCIENCE AND TECHNOLOGY CLUSTERS (RUMPUN IPTEK). A cluster is a collection of a number of trees, branches, and twigs of science, arranged systematically (UURI No. 12 of 2012, Article 10(1)). The science and technology clusters are: religious sciences, humanities, social sciences, natural sciences, formal sciences, and applied sciences (UURI No. 12 of 2012, Article 10(2)).

  7. The Applied Sciences Cluster. The applied sciences cluster is the group of science and technology fields that study and develop the application of science to human life, including: agriculture, architecture and planning, business, education, engineering, forestry and the environment, family and consumer sciences, health, sports, journalism, mass media and communication, law, library and museum studies, military science, public administration, social work, and transportation. UURI No. 12 of 2012, Elucidation of Article 10, Paragraph 2, Letter f.

  8. APPLIED RESEARCH. Applied research is a form of systematic inquiry involving the practical application of science. It accesses and uses some part of the research community's (the academy's) accumulated theories, knowledge, methods, and techniques for a specific, often state-, business-, or client-driven purpose. Applied research deals with solving practical problems and generally employs empirical methodologies. Because applied research resides in the messy real world, strict research protocols may need to be relaxed; for example, it may be impossible to use a random sample. Thus, transparency in the methodology is crucial. The implications that relaxing an otherwise strict canon of methodology has for the interpretation of results should also be considered. Downloaded from: http://en.wikipedia.org/wiki/Applied_research….. 16/9/2012

  9. Three forms of research. The Frascati Manual outlines three forms of research: basic research, applied research, and experimental development. • Basic research is experimental or theoretical work undertaken primarily to acquire new knowledge of the underlying foundations of phenomena and observable facts, without any particular application or use in view. • Applied research is also original investigation undertaken in order to acquire new knowledge; it is, however, directed primarily towards a specific practical aim or objective. • Experimental development is systematic work, drawing on existing knowledge gained from research and/or practical experience, which is directed to producing new materials, products or devices, to installing new processes, systems and services, or to improving substantially those already produced or installed. Downloaded from: http://en.wikipedia.org/wiki/Frascati_Manual….. 16/9/2012

  10. APPLIED SCIENCE. Applied science is the application of human knowledge to build or design useful things. Examples include testing a theoretical model through the use of formal science or solving a practical problem through the use of natural science. Fields of engineering are closely related to the applied sciences. Applied science is important for technology development; its use in industrial settings is usually referred to as research and development (R&D). Applied science differs from fundamental science, which seeks to describe the most basic objects and forces, with less emphasis on practical applications. Applied science spans fields such as the biological and physical sciences. Downloaded from: http://en.wikipedia.org/wiki/Applied_science ….. 16/9/2012

  11. APPLIED RESEARCH. Applied research refers to scientific study and research that seeks to solve practical problems. Applied research is used to find solutions to everyday problems, cure illness, and develop innovative technologies. Psychologists working in human factors or industrial/organizational fields often do this type of research. Downloaded from: http://psychology.about.com/od/aindex/g/appres.htm ….. 16/9/2012

  12. WHAT IS APPLIED RESEARCH? Applied research is designed to solve practical problems of the modern world, rather than to acquire knowledge for knowledge's sake. One might say that the goal of the applied scientist is to improve the human condition. For example, applied research investigates ways to: • improve agricultural productivity • treat or cure a specific disease • improve energy efficiency in homes, offices, or modes of transportation. Some scientists feel that the time has come for a shift in emphasis away from purely basic research and toward applied science. This trend, they feel, is necessitated by the problems resulting from global overpopulation, pollution, and the overuse of the earth's natural resources. Downloaded from: http://www.lbl.gov/Education/ELSI/research-main.html ….. 16/9/2012

  13. APPLIED RESEARCH. Neuman (2000) defines applied research as "research that attempts to solve a concrete problem or address a specific policy question and that has a direct, practical application". Some examples of applied research: action research, social impact assessment, and evaluation research. Downloaded from: ….. 16/9/2012

  14. APPLIED RESEARCH • Answers practical questions that are specific to a place and time • May be exploratory or descriptive in nature • Involves accurate measurement and describes the relationships among the variables of the phenomenon under study

  15. APPLIED RESEARCH • May be conducted by academic institutions or by business and industry • The research is directed "to discover new scientific knowledge that has a specific commercial objective with respect to products, processes, or services".

  16. APPLIED RESEARCH • Examples of applied research questions: • How can rice crops in Indonesia be protected from planthopper pests? • Which vaccine is the most effective and efficient against influenza? • How can the apple orchards in Batu be protected from the impacts of global climate change?

  17. EVALUATION RESEARCH. Evaluation is a methodological area closely related to social research, yet still distinguishable from it. Evaluation utilizes many of the same methodologies used in traditional social research, but because evaluation takes place within a political and organizational context, it requires group skills, management ability, political dexterity, sensitivity to multiple stakeholders, and other skills that social research in general does not rely on as much. Downloaded from: http://www.socialresearchmethods.net/kb/intreval.php ….. 16/9/2012

  18. Definitions of Evaluation. The most frequently used definition: evaluation = the systematic assessment of the worth or merit of an object. This definition is hardly perfect. There are many types of evaluations that do not necessarily result in an assessment of worth or merit -- descriptive studies, implementation analyses, and formative evaluations, to name a few. Better perhaps is a definition that emphasizes the information-processing and feedback functions of evaluation: evaluation = the acquisition and assessment of information to provide useful feedback about some object.

  19. The Goals of Evaluation. The generic goal of most evaluations is to provide "useful feedback" to a variety of audiences, including sponsors, donors, client groups, administrators, staff, and other relevant constituencies. Most often, feedback is perceived as "useful" if it aids in decision-making. But the relationship between an evaluation and its impact is not a simple one -- studies that seem critical sometimes fail to influence short-term decisions, and studies that initially seem to have no influence can have a delayed impact when more congenial conditions arise. Despite this, there is broad consensus that the major goal of evaluation should be to influence decision-making or policy formulation through the provision of empirically-driven feedback.

  20. Evaluation Strategies. There are four major groups of evaluation strategies. Scientific-experimental models are probably the most historically dominant evaluation strategies. Taking their values and methods from the sciences -- especially the social sciences -- they prioritize impartiality, accuracy, objectivity, and the validity of the information generated. Models in the scientific-experimental group include: • experimental and quasi-experimental designs; • objectives-based research that comes from education; • econometrically oriented perspectives, including cost-effectiveness and cost-benefit analysis; and • theory-driven evaluation. Downloaded from: ….. 16/9/2012

  21. Evaluation Strategies. The management-oriented systems models. Two of the most common of these are PERT, the Program Evaluation and Review Technique, and CPM, the Critical Path Method. Two management-oriented systems models were originated by evaluators: the UTOS model, where U stands for Units, T for Treatments, O for Observations, and S for Settings; and the CIPP model, where C stands for Context, I for Input, the first P for Process, and the second P for Product. These management-oriented systems models emphasize comprehensiveness in evaluation, placing evaluation within a larger framework of organizational activities. Downloaded from: ….. 16/9/2012
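Since the slide names CPM, the Critical Path Method, a minimal sketch may help: given hypothetical task durations and precedence relations (both invented for illustration), a forward pass over the task network yields the project duration, and tracing back through the latest-finishing predecessors recovers the critical path.

```python
# Critical Path Method (CPM) sketch on a tiny hypothetical project network.
# Task names, durations, and precedence relations are invented for illustration.

def critical_path(durations, predecessors):
    """Forward pass: earliest finish time per task; then trace the critical path."""
    earliest_finish = {}

    def finish(task):
        if task not in earliest_finish:
            # A task starts when its slowest predecessor finishes.
            start = max((finish(p) for p in predecessors[task]), default=0)
            earliest_finish[task] = start + durations[task]
        return earliest_finish[task]

    for t in durations:
        finish(t)
    # Project duration is the latest finish; walk back via latest predecessors.
    end = max(earliest_finish, key=earliest_finish.get)
    path = [end]
    while predecessors[path[-1]]:
        path.append(max(predecessors[path[-1]], key=earliest_finish.get))
    return earliest_finish[end], list(reversed(path))

durations = {"design": 3, "build": 5, "test": 2, "train": 4}
predecessors = {"design": [], "build": ["design"], "train": ["design"],
                "test": ["build", "train"]}

length, path = critical_path(durations, predecessors)
print(length, path)  # project takes 10 time units along design -> build -> test
```

PERT extends the same network logic with optimistic, most-likely, and pessimistic duration estimates per task rather than a single fixed duration.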

  22. Evaluation Strategies. The qualitative/anthropological models. They emphasize the importance of observation, the need to retain the phenomenological quality of the evaluation context, and the value of subjective human interpretation in the evaluation process. Included in this category are: the approaches known in evaluation as naturalistic or 'Fourth Generation' evaluation; the various qualitative schools; critical theory and art criticism approaches; and the 'grounded theory' approach. Downloaded from: ….. 16/9/2012

  23. Evaluation Strategies. The participant-oriented models. As the term suggests, they emphasize the central importance of the evaluation participants, especially clients and users of the program or technology. Client-centered and stakeholder approaches are examples of participant-oriented models, as are consumer-oriented evaluation systems. Downloaded from: ….. 16/9/2012

  24. Types of Evaluation. Formative evaluation types: • Needs assessment determines who needs the program, how great the need is, and what might work to meet the need • Evaluability assessment determines whether an evaluation is feasible and how stakeholders can help shape its usefulness • Structured conceptualization helps stakeholders define the program or technology, the target population, and the possible outcomes • Implementation evaluation monitors the fidelity of the program or technology delivery • Process evaluation investigates the process of delivering the program or technology, including alternative delivery procedures. Downloaded from: ….. 16/9/2012

  25. Types of Evaluation. Summative evaluation types: • Outcome evaluations investigate whether the program or technology caused demonstrable effects on specifically defined target outcomes • Impact evaluation is broader and assesses the overall or net effects -- intended or unintended -- of the program or technology as a whole • Cost-effectiveness and cost-benefit analysis address questions of efficiency by standardizing outcomes in terms of their dollar costs and values • Secondary analysis reexamines existing data to address new questions or use methods not previously employed • Meta-analysis integrates the outcome estimates from multiple studies to arrive at an overall or summary judgement on an evaluation question. Downloaded from: ….. 16/9/2012
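The meta-analysis item above can be made concrete with a small sketch: a fixed-effect, inverse-variance weighted pooling of per-study effect estimates, which is one common way of integrating outcome estimates into a single summary. The study numbers below are entirely hypothetical.

```python
# Fixed-effect meta-analysis sketch: pool effect estimates from several
# studies by weighting each with the inverse of its variance.
# All effect sizes and variances here are hypothetical.

def pool_fixed_effect(effects, variances):
    """Return the inverse-variance weighted mean effect and its variance."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_variance = 1.0 / sum(weights)  # more precise studies count more
    return pooled, pooled_variance

# Three hypothetical outcome studies of the same intervention.
effects = [0.30, 0.10, 0.20]
variances = [0.01, 0.04, 0.02]

pooled, var = pool_fixed_effect(effects, variances)
print(round(pooled, 3), round(var, 4))
```

A random-effects model would add a between-study variance component to each weight; the fixed-effect version above is the simplest case.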

  26. Evaluation Questions and Methods. In FORMATIVE RESEARCH the major questions and methodologies are: What is the definition and scope of the problem or issue, or what's the question? Formulating and conceptualizing methods might be used, including brainstorming, focus groups, nominal group techniques, Delphi methods, brainwriting, stakeholder analysis, synectics, lateral thinking, input-output analysis, and concept mapping. Where is the problem and how big or serious is it? The most common method used here is "needs assessment", which can include: analysis of existing data sources, and the use of sample surveys, interviews of constituent populations, qualitative research, expert testimony, and focus groups. Downloaded from: ….. 16/9/2012

  27. Evaluation Questions and Methods. In FORMATIVE RESEARCH the major questions and methodologies are: How should the program or technology be delivered to address the problem? Some of the methods already listed apply here, as do detailing methodologies like simulation techniques; multivariate methods like multiattribute utility theory or exploratory causal modeling; decision-making methods; and project planning and implementation methods like flow charting, PERT/CPM, and project scheduling. How well is the program or technology delivered? Qualitative and quantitative monitoring techniques, the use of management information systems, and implementation assessment would be appropriate methodologies here. Downloaded from: ….. 16/9/2012

  28. The questions and methods under SUMMATIVE EVALUATION: What type of evaluation is feasible? Evaluability assessment can be used here, as well as standard approaches for selecting an appropriate evaluation design. What was the effectiveness of the program or technology? One would choose from observational and correlational methods for demonstrating whether desired effects occurred, and quasi-experimental and experimental designs for determining whether observed effects can reasonably be attributed to the intervention and not to other sources. What is the net impact of the program? Econometric methods for assessing cost-effectiveness and cost-benefits would apply here, along with qualitative methods that enable us to summarize the full range of intended and unintended impacts. Downloaded from: ….. 16/9/2012

  29. The Planning-Evaluation Cycle. The planning process could involve any or all of these stages: the formulation of the problem, issue, or concern; the broad conceptualization of the major alternatives that might be considered; the detailing of these alternatives and their potential implications; the evaluation of the alternatives and the selection of the best one; and the implementation of the selected alternative. Downloaded from: ….. 16/9/2012

  30. External Validity – Evaluation Research. External validity is related to generalizing. Validity refers to the approximate truth of propositions, inferences, or conclusions. External validity refers to the approximate truth of conclusions that involve generalizations. External validity is the degree to which the conclusions in your study would hold for other persons in other places and at other times. Downloaded from: ….. 16/9/2012

  31. Improving External Validity. How can we improve external validity? The sampling model suggests that you do a good job of drawing a sample from a population: for instance, you should use random selection, if possible, rather than a nonrandom procedure; and, once respondents are selected, you should try to assure that they participate in your study and that you keep your dropout rates low. You can also use the theory of proximal similarity more effectively. How? Perhaps you could do a better job of describing the ways your contexts and others differ, providing lots of data about the degree of similarity between various groups of people, places, and even times. You might even be able to map out the degree of proximal similarity among various contexts with a methodology like concept mapping. Perhaps the best approach to criticisms of generalizations is simply to show them that they're wrong -- do your study in a variety of places, with different people and at different times. The external validity (ability to generalize) will be stronger the more you replicate your study. Downloaded from: ….. 16/9/2012

  32. THE PROXIMAL SIMILARITY MODEL. 'Proximal' means 'nearby' and 'similarity' means 'similarity'. The term proximal similarity was suggested by Donald T. Campbell as an appropriate relabeling of the term external validity. Under this model, we begin by thinking about different generalizability contexts and developing a theory about which contexts are more like our study and which are less so. For instance, we might imagine several settings that have people who are more similar to the people in our study or people who are less similar. Downloaded from: ….. 16/9/2012

  33. Sampling Model. In the sampling model, you start by identifying the population you would like to generalize to. Then, you draw a fair sample from that population and conduct your research with the sample. Finally, because the sample is representative of the population, you can automatically generalize your results back to the population. There are several problems with this approach. Perhaps you don't know at the time of your study who you might ultimately like to generalize to. You may not be easily able to draw a fair or representative sample. And it's impossible to sample across all the times that you might like to generalize to (like next year). Downloaded from: http://www.socialresearchmethods.net/kb/external.php ….. 16/9/2012
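The sampling model's draw-then-generalize logic can be sketched in a few lines. The "population" below is synthetic, generated purely to illustrate simple random sampling; with random selection, the sample mean is an unbiased estimate of the population mean.

```python
# Sampling-model sketch: draw a simple random sample from a listed
# population, then generalize a sample statistic back to the population.
# The population (hypothetical household incomes) is synthetic.

import random

random.seed(42)  # make the draw reproducible

# Hypothetical sampling frame: 10,000 household incomes.
population = [random.gauss(50_000, 12_000) for _ in range(10_000)]

# Simple random sample without replacement, n = 500.
sample = random.sample(population, k=500)

sample_mean = sum(sample) / len(sample)
population_mean = sum(population) / len(population)

# Under random selection, the sample mean estimates the population mean.
print(round(sample_mean), round(population_mean))
```

The slide's objections still apply: this only works when the population can actually be listed and sampled fairly at the time of the study.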

  34. Measurement. Measurement is the process of observing and recording the observations that are collected as part of a research effort. You have to understand the fundamental ideas involved in measuring. Here we consider two of the major measurement concepts. Levels of measurement explains the meaning of the four major levels of measurement: nominal, ordinal, interval, and ratio. Then we move on to the reliability of measurement, including consideration of true score theory and a variety of reliability estimators. You also have to understand the different types of measures that you might use in social research. We consider four broad categories of measurements: Survey research includes the design and implementation of interviews and questionnaires. Scaling involves consideration of the major methods of developing and implementing a scale. Qualitative research provides an overview of the broad range of non-numerical measurement approaches. Unobtrusive measures presents a variety of measurement methods that don't intrude on or interfere with the context of the research. Downloaded from: ….. 16/9/2012
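As a concrete example of the reliability estimators mentioned above, here is a sketch of Cronbach's alpha, a common internal-consistency estimate: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). The survey responses below are invented for illustration.

```python
# Cronbach's alpha sketch: internal-consistency reliability of a
# multi-item scale. Rows = respondents, columns = items (hypothetical data).

def variance(xs):
    """Sample variance (n - 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    k = len(rows[0])                 # number of items in the scale
    items = list(zip(*rows))         # column-wise item scores
    totals = [sum(r) for r in rows]  # per-respondent total score
    return k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))

# Hypothetical 4-item Likert responses from 5 respondents.
responses = [
    [3, 4, 3, 5],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [3, 3, 3, 4],
    [5, 4, 5, 5],
]
print(round(cronbach_alpha(responses), 3))
```

Higher alpha means the items covary strongly relative to their individual noise, i.e. the total score is a more reliable measure of the underlying trait.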

  35. Survey Research. Survey research is one of the most important areas of measurement in applied social research. The broad area of survey research encompasses any measurement procedures that involve asking questions of respondents. A "survey" can be anything from a short paper-and-pencil feedback form to an intensive one-on-one in-depth interview. Types of surveys are divided into two broad areas: questionnaires and interviews. Downloaded from: ….. 16/9/2012

  36. Interviews. Interviews are a far more personal form of research than questionnaires. In the personal interview, the interviewer works directly with the respondent. Unlike with mail surveys, the interviewer has the opportunity to probe or ask follow-up questions. Interviews are generally easier for the respondent, especially if what is sought is opinions or impressions. Interviews can be very time consuming, and they are resource intensive. The interviewer is considered a part of the measurement instrument, and interviewers have to be well trained in how to respond to any contingency. Downloaded from: ….. 16/9/2012

  37. IMPACT ASSESSMENT RESEARCH. The Impact Assessment Research Centre (IARC) at the University of Manchester aims to promote knowledge and practice of impact assessment. The increasing interest in evidence-based policy-making has raised new challenges and debates among impact assessment researchers and practitioners. By encouraging an integrated approach to impact assessment, the IARC seeks to strengthen the linkages between different impact assessment methodologies and practices. The work of the IARC is multidisciplinary, and recognises that sustainable development can only be achieved on the basis of a balanced, context- and time-specific assessment of the economic, social and environmental impacts of policies, programmes and projects. Downloaded from: http://www.sed.manchester.ac.uk/research/iarc/ ….. 16/9/2012

  38. The Impact Assessment Research Centre (IARC). The Impact Assessment Research Centre (IARC) in IDPM specialises in the integrated assessment of the economic, social and environmental impacts on sustainable development of national, regional and international policies. Its current research programme includes sustainability impact assessment (SIA) of global and regional trade agreements, the effectiveness of national sustainable development strategies (NSDS), and regulatory impact assessment (RIA) of draft legislation and other policy measures. Downloaded from: pdf.usaid.gov/pdf_docs/PNADN201.pdf….. 16/9/2012

  39. COMMON PROBLEMS IN IMPACT ASSESSMENT RESEARCH. Introduction. Doing an impact assessment of a private sector development (PSD) program is inherently challenging. Doing it the "right" way -- such that it satisfies minimally acceptable methodological standards -- is more challenging yet. During the course of planning and implementing an impact assessment, it is not uncommon for researchers to confront any number of problems that have serious implications for impact assessment methodology and, consequently, for the validity of its findings. The impact assessment problems discussed in this paper include: timing, spillover effects, selection bias, capability of local research partners, and unanticipated external factors, such as climatic disasters. Downloaded from: pdf.usaid.gov/pdf_docs/PNADN201.pdf….. 16/9/2012

  40. COMMON PROBLEMS IN IMPACT ASSESSMENT RESEARCH. Timing. The timing of the impact assessment may seriously affect the validity of its findings. Ideally, a set-aside for an impact assessment is incorporated into the original program budget, including funding for a technical expert to set up the impact assessment early in the program cycle. More commonly, however, the decision to do an impact assessment occurs after the program is already underway. This can cause a number of problems. To begin with, the baseline may come too late to capture impacts that have already occurred, resulting in an understatement of actual program impacts. The longer the time lag between program launch and the baseline research, the greater the probability that the impact assessment fails to capture certain program impacts. Even more striking examples of the problems resulting from delaying the start of research are provided by cases in which the impact assessment is done either near the end or after the end of a program. In these cases, there is no possibility of doing a baseline study, or, indeed, of getting any longitudinal data. Everything depends on a one-time set of research activities, and often entails a heavy reliance on retrospective questions. Downloaded from: pdf.usaid.gov/pdf_docs/PNADN201.pdf….. 16/9/2012

  41. COMMON PROBLEMS IN IMPACT ASSESSMENT RESEARCH. Spillover Effects. A second common problem occurs when program benefits spill over to non-program participants. An example is the recently completed impact assessment of the Cluster Access to Business Services (CABS) program in Azerbaijan, which seeks to "improve profitability for clusters of rural poor and women micro-entrepreneurs by increasing access to a network of trained veterinary and production advice service providers . . . ." The baseline study, conducted a year after program launch, showed significantly higher net profits for the treatment-group veterinarians, but this difference had disappeared by the time the follow-up study took place. Downloaded from: pdf.usaid.gov/pdf_docs/PNADN201.pdf….. 16/9/2012
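The baseline-versus-follow-up comparison of a treatment group against a comparison group is, at its simplest, a difference-in-differences calculation. A minimal sketch, with entirely hypothetical profit figures (not the CABS data), shows how spillover to the comparison group shrinks the estimated program effect:

```python
# Difference-in-differences sketch: (change for treatment group) minus
# (change for comparison group). All net-profit figures are hypothetical.

def diff_in_diff(treat_before, treat_after, control_before, control_after):
    """Estimated program effect after netting out the common trend."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_after) - mean(treat_before)) - \
           (mean(control_after) - mean(control_before))

treat_before = [100, 120, 110]
treat_after = [150, 170, 160]
control_before = [90, 100, 95]
control_after = [130, 145, 140]  # spillover: the comparison group gained too

effect = diff_in_diff(treat_before, treat_after, control_before, control_after)
print(effect)
```

When benefits spill over, the comparison group's change absorbs part of the program's impact, so the estimate understates the true effect rather than showing that the program stopped working.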

  42. COMMON PROBLEMS IN IMPACT ASSESSMENT RESEARCH. Selection Bias. One of the greatest challenges in doing a high-quality impact assessment is identifying statistically valid treatment and control groups. The best method of group selection is the experimental method, in which membership in the treatment and control groups is determined via random assignment. Where experimental methods are not feasible, quasi-experimental methods are a second-best alternative. Downloaded from: pdf.usaid.gov/pdf_docs/PNADN201.pdf….. 16/9/2012
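The random assignment the slide recommends can be sketched directly; the participant identifiers below are hypothetical.

```python
# Random assignment sketch: split participants into treatment and control
# groups by shuffling, which balances observed and unobserved characteristics
# in expectation and thereby removes selection bias. IDs are hypothetical.

import random

random.seed(7)  # reproducible assignment

participants = [f"farm-{i:03d}" for i in range(1, 101)]

shuffled = participants[:]
random.shuffle(shuffled)

treatment = sorted(shuffled[:50])  # receives the program
control = sorted(shuffled[50:])    # does not

# Every participant lands in exactly one group.
assert set(treatment) | set(control) == set(participants)
print(len(treatment), len(control))
```

Quasi-experimental alternatives (e.g. matching on observed characteristics) only balance what can be measured, which is why randomization is the first-best option when it is feasible.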

  43. COMMON PROBLEMS IN IMPACT ASSESSMENT RESEARCH. Ensuring Good Performance by the Local Research Partner. Although it is not often emphasized, selecting the local research partner is one of the most important steps in the impact assessment process. Most developing countries have a variety of consulting firms, marketing research firms, research institutes, or universities with experience in local field research. The capabilities of these local researchers, however, can vary considerably. A bad selection can result in higher costs; missed deadlines; greater frustration; poorer quality of work; strained relations with the program, program partners, and donors; questionable results; and, in extreme cases, failure of the research. Downloaded from: pdf.usaid.gov/pdf_docs/PNADN201.pdf….. 16/9/2012

  44. COMMON PROBLEMS IN IMPACT ASSESSMENT RESEARCH. Unanticipated External Events. Even if an impact assessment is well planned, the methodology is sound, and the local research partner is competent, it may encounter outside-project factors that threaten or even wipe out the entire study. One example is the impact assessment undertaken of the craft exporter project in Guatemala. In this case, the baseline research was successfully completed in 2003. The baseline survey included a sample of 1,529 producers of textile, ceramic, wood and leather goods, of which 314 were affiliated with the project. The analysis in 2006, however, was based on 56 affiliated producers and 105 non-affiliated textile producers who did not present the same demographic profile as the original textile producers. Downloaded from: pdf.usaid.gov/pdf_docs/PNADN201.pdf….. 16/9/2012

  45. Social Impact Assessment tools and methods. Analytical tools: • STAKEHOLDER ANALYSIS is an entry point to SIA and participatory work. It addresses strategic questions, e.g. who are the key stakeholders? what are their interests in the project or policy? what are the power differentials between them? what relative influence do they have on the operation? This information helps to identify institutions and relations which, if ignored, can have a negative influence on proposals or, if considered, can be built upon to strengthen them. • GENDER ANALYSIS focuses on understanding and documenting the differences in gender roles, activities, needs and opportunities in a given context. It highlights the different roles and behaviour of men and women. These attributes vary across cultures, class, ethnicity, income, education, and time; and so gender analysis does not treat women as a homogeneous group. • SECONDARY DATA REVIEW of information from previously conducted work is an inexpensive, easy way to narrow the focus of a social assessment, to identify experts and institutions that are familiar with the development context, and to establish a relevant framework and key social variables in advance. Downloaded from: http://www.unep.ch/etu/publications/EIA_2ed/EIA_E_top13_hd1.PDF ….. 16/9/2012

  46. Social Impact Assessment tools and methods. Community-based methods: • Participatory Rural Appraisal (PRA) covers a family of participatory approaches and methods which emphasises local knowledge and action. It uses group animation and exercises to help stakeholders share information and make their own appraisals and plans. Originally developed for use in rural areas, PRA has been employed successfully in a variety of settings to enable local people to work together to plan community-appropriate developments. • SARAR is an acronym of five attributes -- self-esteem, associative strength, resourcefulness, action planning and responsibility for follow-through -- that are important for achieving a participatory approach to development. SARAR is a philosophy of adult education and empowerment which seeks to optimise people's ability to self-organize, take initiatives, and shoulder responsibilities. It is best classed as an experiential methodology, which involves setting aside hierarchical differences, team building through training, and learning from local experience rather than from external experts. Downloaded from: http://www.unep.ch/etu/publications/EIA_2ed/EIA_E_top13_hd1.PDF ….. 16/9/2012

  47. Social Impact Assessment tools and methods. Consultation methods: • Beneficiary Assessment (BA) is a systematic investigation of the perceptions of a sample of beneficiaries and other stakeholders to ensure that their concerns are heard and incorporated into project and policy formulation. The purposes are to (a) undertake systematic listening, which "gives voice" to poor and other hard-to-reach beneficiaries, highlighting constraints to beneficiary participation, and (b) obtain feedback on interventions. Downloaded from: http://www.unep.ch/etu/publications/EIA_2ed/EIA_E_top13_hd1.PDF ….. 16/9/2012

  48. Social Impact Assessment tools and methods. Observation and interview tools: • Participant Observation is a field technique used by anthropologists and sociologists to collect qualitative data and to develop an in-depth understanding of people's motivations and attitudes. It is based on looking, listening, asking questions and keeping detailed field notes. Observation and analysis are supplemented by desk reviews of secondary sources, and hypotheses about local reality are checked with key local informants. • Semi-structured Interviews are a low-cost, rapid method for gathering information from individuals or small groups. Interviews are partially structured by a written guide to ensure that they are focused on the issue at hand, but stay conversational enough to allow participants to introduce and discuss aspects that they consider to be relevant. • Focus Group Meetings are a rapid way to collect comparative data from a variety of stakeholders. They are brief meetings -- usually one to two hours -- with many potential uses, e.g. to address a particular concern; to build community consensus about implementation plans; to cross-check information with a large number of people; or to obtain reactions to hypothetical or intended actions. • Village Meetings allow local people to describe problems and outline their priorities and aspirations. They can be used to initiate collaborative planning, and to periodically share and verify information gathered from small groups or individuals by other means. Downloaded from: http://www.unep.ch/etu/publications/EIA_2ed/EIA_E_top13_hd1.PDF ….. 16/9/2012

  49. Social Impact Assessment tools and methods. Participatory methods: • Role Playing helps people to be creative, open their perspectives, understand the choices that another person might face, and make choices free from their usual responsibilities. This exercise can stimulate discussion, improve communication, and promote collaboration at both community and agency levels. • Wealth Ranking (also known as well-being ranking or vulnerability analysis) is a visual technique to engage local people in the rapid data collection and analysis of social stratification in a community (regardless of language and literacy barriers). It focuses on the factors which constitute wealth, such as ownership of or right to use productive assets, relationships to locally powerful people, labour and indebtedness, and so on. • Access to Resources is a tool to collect information and raise awareness of how access to resources varies according to gender, age, marital status, parentage, and so on. This information can make all the difference to the success or failure of a proposal; for example, if health clinics require users to pay cash fees, and women are primarily responsible for accompanying sick or pregnant family members to the clinic, then women must have access to cash. Downloaded from: http://www.unep.ch/etu/publications/EIA_2ed/EIA_E_top13_hd1.PDF ….. 16/9/2012

  50. Social Impact Assessment tools and methods. Participatory methods: • Analysis of Tasks clarifies the distribution of domestic and community activities by gender and the degree of role flexibility that is associated with each task. This is central to understanding the human resources that are necessary for running a community. • Mapping is an inexpensive tool for gathering both descriptive and diagnostic information. Mapping exercises are useful for collecting baseline data on a number of indicators as part of a beneficiary assessment or rapid appraisal, and can lay the foundation for community ownership of development planning by including different groups. Downloaded from: http://www.unep.ch/etu/publications/EIA_2ed/EIA_E_top13_hd1.PDF ….. 16/9/2012
