
Research Design: Using Quantitative Methods


bschmitt




Presentation Transcript


  1. Research Design: Using Quantitative Methods

  2. Objectives By the end of this session you will be able to: • Describe the experimental and quasi-experimental research approaches. • Formulate appropriate questions and hypotheses. • Identify populations and samples. • Describe the principles of research tool design (validity and reliability).

  3. Stages in the experimental design process

  4. Identifying issues ‘Good’ research topics might emerge: • From the literature. • Within workplace settings. • From previous projects/assignments. • From a sponsor.

  5. Review the literature The literature review of the research topic addresses: • What are the origins and definitions of the topic? • What are the key sources? • What are the main questions/problems that have been addressed to date? • What are the major issues and debates about the topic? • What is the epistemological and ontological basis for the subject? • What are the key theories, concepts and ideas? Source: adapted from Hart (1998)

  6. Develop questions/hypotheses Kerlinger and Lee (2000) argue that a good research question: • Expresses a relationship between variables (e.g., company image and sales levels). • Is stated in unambiguous terms in a question format. • Must be capable of being operationally defined (Black, 1993).

  7. Types of applied research questions – with examples

  8. A hypothesis • Is a speculative statement of the relation between two or more variables. • Describes a research question in a testable format which predicts the nature of the answer. • Can be written as a directional statement, such as, ‘When this happens, then that happens’.

  9. Identifying independent and dependent variables • Dependent variable – a variable that forms the focus of the research and depends on another (the independent, or explanatory) variable. • Independent variable – used to explain or predict a result or outcome on the dependent variable. • Intervening variable – a hypothetical internal state used to explain the relationship between two observed variables.

  10. Conducting the study • Planning the design. • Gathering data. • Storing data. • Observing ethical guidelines.

  11. Using descriptive and inferential statistics • Descriptive statistics (summarize the data collected). • Inferential statistics, e.g.: t-test, Mann-Whitney U, chi-square, Spearman's rho, Pearson product-moment correlation, ANOVA.
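The most common of these inferential tests, the independent-samples t-test, can be computed in a few lines. The sketch below is illustrative only: the two groups of scores are made-up data, and the `t_test_ind` helper is not from the slides (it implements the standard pooled-variance Student's t-statistic using only the standard library).

```python
import statistics

def t_test_ind(a, b):
    """Student's independent-samples t-statistic (pooled, equal-variance)."""
    na, nb = len(a), len(b)
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variances
    pooled = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)
    se = (pooled * (1 / na + 1 / nb)) ** 0.5
    return (ma - mb) / se

# Hypothetical test scores for an experimental and a control group (made up).
experimental = [72, 85, 78, 90, 66, 81, 74, 88]
control = [65, 70, 62, 75, 68, 71, 64, 69]

t = t_test_ind(experimental, control)
# With df = 14, |t| > 2.145 is significant at the 0.05 level (two-tailed),
# so here the difference in group means would be judged significant.
print(round(t, 2))
```

In practice a statistics package would also report the p-value directly; the critical-value comparison above is the hand-calculation equivalent.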

  12. Accept or reject the hypothesis • A hypothesis cannot be ‘proved’ to be right – all theories are provisional/tentative (until disproved). • Acceptance or rejection of the hypothesis, based upon the weight of statistical evidence and probability, entails: • The risk of accepting the hypothesis as true when it is in fact false (a Type I error). • The risk of rejecting the hypothesis as false when it is in fact true (a Type II error).
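The first of these risks is exactly what the significance level controls. A small stdlib-only simulation (illustrative parameters: n = 8 per group, alpha = 0.05, made-up population values) shows that when the null hypothesis is true, a test run at the 0.05 level still "finds" an effect about 5% of the time:

```python
import random

random.seed(1)  # reproducible illustration

def pooled_t(a, b):
    """Independent-samples t-statistic, pooled variance, equal group sizes."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    va = sum((x - ma) ** 2 for x in a) / (n - 1)
    vb = sum((x - mb) ** 2 for x in b) / (n - 1)
    se = ((va + vb) / n) ** 0.5
    return (ma - mb) / se

# Simulate experiments where the null hypothesis is TRUE: both groups are
# drawn from the same distribution, so any "significant" result is an error.
trials, false_positives = 2000, 0
for _ in range(trials):
    a = [random.gauss(50, 10) for _ in range(8)]
    b = [random.gauss(50, 10) for _ in range(8)]
    if abs(pooled_t(a, b)) > 2.145:  # two-tailed critical value, df = 14
        false_positives += 1

rate = false_positives / trials
# The observed false-positive (Type I error) rate converges on alpha (~0.05).
print(round(rate, 3))
```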

  13. Preparing the formal report • Why the study was conducted. • What research questions and hypotheses were evaluated. • How questions were turned into a research design. • What differences were observed between the hypotheses and the results. • What conclusions can be drawn – do these support or contradict the hypothesis and existing theories?

  14. Experimental design The researcher has control over the experiment in terms of: • Who is being researched (subjects randomly assigned). • What is being researched. • When the research is to be conducted. • Where the research is to be conducted. • How the research is to be conducted. In practice, researchers often have no control over the ‘who’ and must use pre-existing groups – hence the quasi-experimental design.

  15. Quasi-experimental designs Quasi-experimental designs are best used when: • Randomization is too expensive, infeasible to attempt or impossible to monitor closely. • There are difficulties, including ethical considerations, in withholding the treatment. • The study is retrospective and the programme being studied is already underway.

  16. Differences in quantitative research design Differences between experimental, quasi-experimental and non-experimental research

  17. Faulty quantitative designs One group, pre-test/post-test problems: • Maturation effects • Measurement procedures • Instrumentation • Experimental mortality • Extraneous variables

  18. Sound quantitative designs (1) Experimental group with control

  19. Sound quantitative designs (2) Quasi-experimental design with non-equivalent control

  20. Generalizing from samples to populations To generalize, samples must be representative of the population, through: • Random probability sampling (but note problem of sampling error).

  21. Types of probability sample • Simple random sample (where the sampling frame is equal to the population). • Stratified random sample (sampling from strata according to some characteristic e.g., geographical area, age, gender). • Cluster sample (e.g., a county, households in a street, schools in a town, etc.) • Stage sample (cluster sample followed by random selection from cluster).
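The first two sample types above can be sketched in a few lines of stdlib Python. The sampling frame, the "region" strata and all sizes here are made up for illustration; the `stratified_sample` helper is not from the slides.

```python
import random

random.seed(42)  # reproducible illustration

# Hypothetical sampling frame: 200 people tagged by region (made-up strata).
population = [{"id": i, "region": "north" if i < 120 else "south"}
              for i in range(200)]

# Simple random sample: every member has an equal chance of selection.
simple = random.sample(population, k=20)

# Stratified random sample: draw from each stratum in proportion to its size,
# so the sample mirrors the population on the stratifying characteristic.
def stratified_sample(frame, key, k):
    strata = {}
    for member in frame:
        strata.setdefault(member[key], []).append(member)
    sample = []
    for members in strata.values():
        share = round(k * len(members) / len(frame))
        sample.extend(random.sample(members, share))
    return sample

stratified = stratified_sample(population, "region", 20)
# north (120/200 of the frame) contributes 12 of the 20; south contributes 8.
```

A simple random sample can, by chance, over-represent one region; the stratified version guarantees the 12/8 split by construction.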

  22. Non-random sampling • Purposive: Subjects selected against one or more traits. • Quota: Non-random selection of subjects from identified strata until the planned number of subjects is reached. • Convenience or volunteer: Subjects who are readily available or who offer to take part. • Snowball: Researcher identifies a small number of subjects, who, in turn, identify others in the population.

  23. Instrument design: validity • Internal validity: The extent to which changes in the dependent variable can be attributed to the independent variable. • External validity: The extent to which it is possible to generalize from the data to a larger population or setting. • Criterion validity: The extent to which scores on a new measure of a concept correlate with existing, widely accepted measures of that concept. • Construct validity: The extent to which an instrument measures abstract concepts and traits, such as ability, anxiety, attitude and knowledge.

  24. Instrument design: reliability Reliability is the consistency between two measures of the same thing such as: • Two separate instruments. • Two like halves of an instrument (for example, two halves of a questionnaire). • The same instrument applied on two occasions. • The same instrument administered by two different people.
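The "two like halves" check (split-half reliability) is straightforward to compute: correlate each respondent's score on one half of the instrument with their score on the other half. The sketch below uses made-up questionnaire responses and assumes a Pearson correlation as the consistency measure; the Spearman–Brown step-up at the end is the standard correction for having halved the test length.

```python
import statistics

def pearson(x, y):
    """Pearson product-moment correlation coefficient."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical responses: 6 respondents x 8 questionnaire items (made up).
responses = [
    [4, 5, 4, 4, 5, 4, 5, 4],
    [2, 2, 3, 2, 2, 3, 2, 2],
    [5, 5, 5, 4, 5, 5, 4, 5],
    [3, 3, 2, 3, 3, 2, 3, 3],
    [1, 2, 1, 1, 2, 1, 1, 2],
    [4, 4, 4, 5, 4, 4, 5, 4],
]

# Split-half reliability: odd-numbered items vs even-numbered items.
odd = [sum(r[0::2]) for r in responses]
even = [sum(r[1::2]) for r in responses]
r_half = pearson(odd, even)
r_full = 2 * r_half / (1 + r_half)  # Spearman–Brown step-up to full length
```

A high correlation between the two halves (close to 1) indicates that the items are measuring the same thing consistently.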

  25. Summary • Experimental research generally comprises two stages: the planning stage and the operational stage. • Experimental research begins from a priori questions or hypotheses that the research is designed to test. • Research questions should express a relationship between variables. A hypothesis is predictive and capable of being tested. • Dependent variables are what experimental research designs are meant to affect through the manipulation of one or more independent variables. • In a true experimental design the researcher has control over the experiment: who, what, when, where and how the experiment is to be conducted. This includes control over the who of the experiment – that is, subjects are assigned to conditions randomly. • Where any of these elements of control is either weak or lacking, the study is said to be a quasi-experiment. • In true experiments, it is possible to assign subjects to conditions, whereas in quasi-experiments subjects are selected from previously existing groups. • Research instruments need to be both valid and reliable. Validity means that an instrument measures what it is intended to measure. Reliability means that an instrument is consistent in this measurement.
