
Research Methods Measurement Instruments Project


Presentation Transcript


  1. Research Methods: Measurement Instruments Project • BIM2313 Lesson 9 • BIM2313 Research Methods • Zainudin Johari

  2. Identifying Appropriate Measurement Instruments • Sometimes we will be able to use one or more existing measurement instruments; at other times we will have to develop our own • Measurement instruments provide the basis on which the entire research effort rests • Describe any instrument used in explicit, concrete terms • Provide evidence that the instruments you use have a reasonable degree of validity and reliability for your purposes • Validity = the extent to which the instrument measures what it is actually intended to measure • Reliability = the extent to which it yields consistent results when the characteristic being measured hasn’t changed

  3. Determining the Validity of Measurement Instruments Face validity • The extent to which, on the surface, an instrument looks like it is measuring a particular characteristic • Useful for ensuring the cooperation of people who are participating in a research study • Since it relies entirely on subjective judgment, it is not, in and of itself, terribly convincing evidence that an instrument is truly measuring what the researcher wants to measure

  4. Determining the Validity of Measurement Instruments Content validity • The extent to which a measurement instrument is a representative sample of the content area (domain) being measured • Often relevant when assessing people’s achievement in some area – for instance, the knowledge they have learned during classroom instruction or the job skills they have acquired in a rehabilitation program • An instrument with high content validity has items or questions that reflect the various parts of the content domain in appropriate proportions and that require the particular behaviours and skills central to that domain

  5. Determining the Validity of Measurement Instruments Criterion validity • The extent to which the results of an assessment instrument correlate with another, presumably related measure (the criterion) • For example, a personality test designed to assess a person’s shyness or outgoingness (introversion or extroversion) has criterion validity if its scores correlate with scores on other tests of introversion versus extroversion • Likewise, an instrument designed to measure a salesperson’s effectiveness on the job should correlate with the number of sales the individual actually makes during the course of a business day
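To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical scores) of how one might gauge criterion validity by correlating scores on a new instrument with scores on an established criterion measure:

```python
# A minimal sketch of checking criterion validity: correlate instrument scores
# with an external criterion. All numbers below are hypothetical illustrations.
import numpy as np

# Hypothetical scores from a new shyness/outgoingness test (higher = more extroverted)
new_test = np.array([12, 18, 25, 31, 22, 15, 28, 34, 19, 27])
# Hypothetical scores on an established introversion-extroversion inventory (the criterion)
criterion = np.array([10, 20, 27, 35, 24, 13, 30, 38, 18, 29])

# Pearson correlation coefficient; values near +1 suggest strong criterion validity
r = np.corrcoef(new_test, criterion)[0, 1]
print(f"Correlation with criterion measure: r = {r:.2f}")
```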

  6. Determining the Validity of Measurement Instruments Construct validity • The extent to which an instrument measures a characteristic that cannot be directly observed but must instead be inferred from patterns in people’s behaviour (such a characteristic is a construct) • Motivation, creativity, racial bias, bedside manner – all of these are constructs, in that none of them can be directly observed and measured • Sometimes there is universal agreement that a particular instrument provides a valid measure of a particular characteristic • We could all agree that a ruler measures length, a thermometer measures temperature, and a barometer measures air pressure • But whenever we do not have such universal agreement, we must provide evidence that an instrument we are using has validity for our purpose

  7. Determining the Validity of Measurement Instruments • To demonstrate that their measurement instruments have validity for their purposes, researchers commonly take approaches such as the following:
  - A multitrait-multimethod approach: Two or more different characteristics are each measured using two or more different approaches. Different measures of the same characteristic should be highly correlated; measures of different characteristics obtained with the same method should not be highly correlated (a small illustration follows this slide)
  - A table of specifications: To construct a measurement instrument that provides a representative sample of a particular content domain – in other words, to establish content validity – the researcher often constructs a two-dimensional grid (table of specifications) listing the specific topics and behaviours that reflect achievement in the domain. In each cell of the grid, the researcher indicates the relative importance of each topic-behaviour combination, and then develops a series of tasks or test items that reflects the various topics and behaviours in appropriate proportions
  - Judgment by a panel of experts: Several experts in a particular area are asked to scrutinize an instrument to ascertain its validity for measuring the characteristic in question
  Although none of these approaches guarantees the validity of a measurement instrument, each one increases the likelihood of such validity.
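As an illustration of the multitrait-multimethod idea, the following sketch builds a small correlation matrix from simulated, hypothetical scores; the trait names (shyness, creativity) and method labels (questionnaire, observation) are invented for illustration only:

```python
# A minimal sketch of inspecting a multitrait-multimethod matrix.
# In practice each column would hold real scores from one trait measured by one method.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
shyness = rng.normal(size=50)      # hypothetical underlying trait levels
creativity = rng.normal(size=50)

scores = pd.DataFrame({
    "shyness_questionnaire":    shyness + rng.normal(scale=0.3, size=50),
    "shyness_observation":      shyness + rng.normal(scale=0.3, size=50),
    "creativity_questionnaire": creativity + rng.normal(scale=0.3, size=50),
    "creativity_observation":   creativity + rng.normal(scale=0.3, size=50),
})

# Same trait, different methods should correlate highly (convergent evidence);
# different traits should not correlate highly (discriminant evidence).
print(scores.corr().round(2))
```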

  8. Determining the Reliability of Measurement Instruments • Interrater reliability – the extent to which two or more individuals evaluating the same product or performance give identical judgments • Internal consistency reliability – the extent to which all the items within a single instrument yield similar results • Equivalent forms reliability – the extent to which two different versions of the same instrument (e.g., “Form A” and “Form B” of a scholastic aptitude test) yield similar results • Test-retest reliability – the extent to which the same instrument yields the same result on two different occasions
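For the quantitative forms of reliability listed above, here is a minimal sketch (hypothetical data throughout) of three common estimates: Cronbach's alpha for internal consistency, a correlation of total scores across two administrations for test-retest reliability, and Cohen's kappa for interrater reliability. Values closer to 1 indicate stronger reliability; what counts as acceptable depends on the instrument's purpose.

```python
# A minimal sketch (hypothetical data throughout) of three common reliability estimates.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def cohen_kappa(r1, r2) -> float:
    """Interrater reliability: chance-corrected agreement between two raters."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_observed = np.mean(r1 == r2)
    categories = np.union1d(r1, r2)
    p_expected = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical responses of 6 people to a 4-item instrument (rows = people, columns = items)
responses = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 3],
    [5, 5, 4, 4],
    [1, 2, 2, 1],
    [3, 3, 4, 3],
    [4, 4, 5, 5],
])
print(f"Cronbach's alpha (internal consistency): {cronbach_alpha(responses):.2f}")

# Test-retest: correlate total scores from two hypothetical administrations
time1 = responses.sum(axis=1)
time2 = time1 + np.array([1, -1, 0, 1, 0, -1])   # slight, hypothetical fluctuation
print(f"Test-retest correlation: {np.corrcoef(time1, time2)[0, 1]:.2f}")

# Interrater: two raters judging the same 8 performances
rater_a = ["good", "good", "fair", "poor", "good", "fair", "fair", "poor"]
rater_b = ["good", "fair", "fair", "poor", "good", "fair", "good", "poor"]
print(f"Cohen's kappa (interrater): {cohen_kappa(rater_a, rater_b):.2f}")
```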

  9. Reliability and Validity • A researcher can enhance the reliability of a measurement instrument in several ways. 1) The instrument should always be administered in a consistent fashion; in other words, there should be standardisation in use of the instrument from one situation or person to the next 2) To the extent that subjective judgments are required, specific criteria should be established that dictate the kinds of judgments the researcher makes 3) Any research assistants who are using the instrument should be well trained so that they obtain similar results

  10. Reliability and Validity • We can measure something accurately only when we can also measure it consistently • In other words, in order to have validity, we must also have reliability • The more valid and reliable our measurement instruments are, the more likely we are to draw appropriate conclusions from the data we collect and, thus, to solve our research problem in a credible fashion

  11. Linking Data and Research Methodology • Data are like ore – they contain pieces of the truth but are in a rather unrefined state • To extract meaning from the data, we employ what is commonly called research methodology (RM) • Data and methodology are inextricably interdependent • The RM used for a particular research problem must always take into account the nature of the data that will be collected in resolving the problem • RM is merely an operational framework within which the facts are placed so that their meaning may be seen more clearly

  12. Data and RM • Different questions yield different types of information • Different research problems lead to different research designs and methods, which in turn result in the collection of different types of data and different interpretations of those data • Many kinds of data may be suitable only for a particular methodology

  13. Data and RM • To some extent, the data dictate the research method • Consider historical data from written records of past events: we cannot extract their meaning through a laboratory experiment, because an experiment is simply not suited to the nature of such data • A true experiment is not the only form of inquiry that constitutes research

  14. Nature of Data • Data take two broad forms: writings and observations • Written records and accounts of past happenings • Observations for which description is the best vehicle of transmission, made at the scene of occurrence • Observations that are quantified and exist in the form of numerical concepts – expressed in the language of mathematics • Observations of differences and likenesses that arise from comparing or contrasting one set of observations with another set of similar observations

  15. Research Methodology (RM) • These four kinds of data demand four discrete and different research methodologies • The data dictate the RM • Unless the data fit a given methodology, the effort fails to qualify as research

  16. Research Methodology (RM) • There is a one-to-one correspondence between data and RM • Historical method – for documentary or literary data • Descriptive survey method (normative survey method) – for data from observational situations, whether physically observed or gathered through questionnaire or poll techniques • Analytical survey method – for quantitative data that need statistics to extract their meaning • Experimental method – appropriate for data derived under experimental control, e.g., a pretest-posttest design with two separate groups or with one group from which data are collected at two separate points in time

  17. Quantitative vs. Qualitative Research • Quantitative research – is used to answer questions about relationships among measured variables with the purpose of explaining, predicting, and controlling phenomena. This approach is sometimes called the traditional, experimental, or positivist approach • Qualitative research – is used to answer questions about the complex nature of phenomena, often with the purpose of describing and understanding the phenomena from the participants’ point of view. This approach is also referred to as the interpretative, constructivist, or postpositivist approach

  18. Quantitative vs. Qualitative Research • Both approaches involve similar processes (e.g., formation of one or more hypotheses, review of the related literature, collection and analysis of data). Yet these processes are often combined and carried out in different ways, leading to distinctly different research methods • Quantitative researchers usually start with a specific hypothesis to be tested, isolate the variables they want to study, control for extraneous variables, use a standardized procedure to collect some form of numerical data, and use statistical procedures to analyse and draw conclusions from the data

  19. Quantitative vs. Qualitative Research • Qualitative researchers often start with general research questions rather than specific hypotheses, collect an extensive amount of verbal data from a small number of participants, organise those data into some form that gives them coherence, and use verbal descriptions to portray the situation they have studied

  20. Qualitative vs. Quantitative Research • A quantitative study usually ends with confirmation or disconfirmation of the hypotheses that were tested • A qualitative study is more likely to end with tentative answers or hypotheses about what was observed. These tentative hypotheses may form the basis of future studies (perhaps quantitative in nature) designed to test the proposed hypotheses • In this way, qualitative and quantitative approaches represent complementary components of the research process – appropriate for answering different kinds of questions • As a result, we learn more about the world when we have both quantitative and qualitative methodologies

  21. Qualitative vs. Quantitative Research • Summary of Qualitative vs. Quantitative Approaches • Please refer to Table 5.1 on page 96 of the textbook • Deciding whether to use a quantitative or qualitative approach • Please refer to Table 5.2 on page 106 of the textbook

  22. Other RMs • Action Research • Case and Field Study Research • Correlational Research • Developmental Research • Ex Post Facto or Causal-Comparative • Quasi-Experimental Research

  23. The End • Zainudin Johari – B.Sc. (Hons) Computer Science, UPM; M.Sc. Computer Science (Information Systems), UPM; pending Doctorate in Education IT, Open University Malaysia • BIM2313 Research Methods
