
Unanswered Questions in Typical Literature Review


Presentation Transcript


  1. Unanswered Questions in Typical Literature Review
  1. Thoroughness
  • How thorough was the literature search?
  • Did it include a computer search and a hand search?
  • In a computer search, what were the descriptors used?
  • Which journals were searched?
  • Were theses and dissertations searched and included?

  2. 2. Inclusion/Exclusion
  • On what basis were studies included or excluded from the written review?
  • Were theses and dissertations arbitrarily excluded?
  • Did the author make inclusion or exclusion decisions based on:
    • the perceived internal validity of the research?
    • sample size?
    • research design?
    • use of appropriate statistics?

  3. 3. Conclusions
  • Were conclusions based on the number of studies supporting or refuting a point (called vote counting)?
  • Were studies weighted differently according to:
    • sample size?
    • meaningfulness of the results?
    • quality of the journal?
    • internal validity of the research?

  4. Establishing Cause and Effect
  • The selection of a good theoretical framework
  • The application of an appropriate experimental design
  • The use of the correct statistical model and analyses
  • The proper selection and control of the independent variable
  • The appropriate selection and measurement of the dependent variable
  • The use of appropriate subjects
  • The correct interpretation of the results

  5. Reliability and Validity
  • Reliability - the consistency or repeatability of a measure; is the measure reproducible?
  • Validity - the truthfulness of a measure; validity depends on reliability and relevance

  6. Reliability
  • The observed score is the sum of the true score and the error score (X = T + E)
  • The more reliable a test, the less error is involved
  • Reliability is defined as the proportion of observed score variance that is true score variance
  • Reliability is determined using Pearson's correlation coefficient or ANOVA
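
A minimal Python sketch of how such a reliability coefficient is obtained in practice, using the test-retest approach described on the next slide; the two trials' scores are invented for illustration:

    # Test-retest reliability via Pearson's product-moment correlation.
    import numpy as np

    trial_1 = np.array([12.0, 15.0, 11.0, 18.0, 14.0, 16.0])  # invented scores
    trial_2 = np.array([13.0, 14.0, 12.0, 17.0, 15.0, 16.0])  # same people, second trial

    # The correlation between the two administrations estimates the proportion
    # of observed score variance that is true score variance.
    r_xx = np.corrcoef(trial_1, trial_2)[0, 1]
    print(f"estimated reliability r_xx = {r_xx:.3f}")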

  7. Types of Reliability
  • Interclass reliability - Pearson product-moment correlation (correlation of only two variables)
  • Test-retest reliability (stability) - determines if a single test is stable over time
  • Equivalence reliability - are two tests similar in measuring the same item or trait?
  • Split-halves reliability - estimates the reliability of a test based on the scores of the odd and even test items
  • Spearman-Brown prophecy - estimates test reliability based on the addition or deletion of test items
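
A sketch of the last two bullets together: the split-halves estimate, followed by the Spearman-Brown prophecy correction for the full-length test. The 0/1 item matrix is invented for illustration:

    # Split-halves reliability with the Spearman-Brown prophecy correction.
    import numpy as np

    # Rows = examinees, columns = items (1 = correct, 0 = incorrect).
    items = np.array([
        [1, 1, 0, 1, 1, 0, 1, 1],
        [0, 1, 0, 0, 1, 0, 0, 1],
        [1, 1, 1, 1, 1, 1, 1, 1],
        [0, 0, 0, 1, 0, 0, 1, 0],
        [1, 0, 1, 1, 0, 1, 1, 1],
    ])

    odd_half = items[:, 0::2].sum(axis=1)   # total score on odd-numbered items
    even_half = items[:, 1::2].sum(axis=1)  # total score on even-numbered items

    # Correlating the two halves estimates the reliability of a half-length test.
    r_half = np.corrcoef(odd_half, even_half)[0, 1]

    # Spearman-Brown prophecy: predicted reliability of the full-length test.
    r_full = (2 * r_half) / (1 + r_half)
    print(f"half-test r = {r_half:.3f}, full-test r = {r_full:.3f}")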

  8. Intraclass Reliability
  • Reliability within an individual's scores across more than two measures
  • Cronbach's alpha estimates the reliability of tests
  • Uses ANOVA to determine mean differences within and between an individual's scores
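
A minimal sketch of Cronbach's alpha computed from an items matrix (rows = people, columns = items); the rating data are invented:

    # Cronbach's alpha: an intraclass estimate of internal consistency.
    import numpy as np

    items = np.array([
        [3, 4, 3, 5],
        [2, 2, 3, 2],
        [5, 4, 5, 5],
        [1, 2, 1, 2],
        [4, 3, 4, 4],
    ], dtype=float)

    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total scores

    alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
    print(f"Cronbach's alpha = {alpha:.3f}")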

  9. Indices of Reliability
  • Index of reliability - the theoretical correlation between true and observed scores: IR = √rxx
  • Standard Error of Measurement (SEM) - the degree of fluctuation of an individual's observed score around the true score: SEM = s√(1 − rxx)
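
A short worked example of both indices, assuming the invented values s = 5.0 and rxx = 0.84:

    # Index of reliability and standard error of measurement.
    import math

    s = 5.0      # standard deviation of observed scores (invented)
    r_xx = 0.84  # reliability coefficient (invented)

    # Theoretical correlation between true and observed scores.
    index_of_reliability = math.sqrt(r_xx)  # ≈ 0.917

    # Expected fluctuation of an observed score around the true score.
    sem = s * math.sqrt(1 - r_xx)           # = 2.0
    print(f"IR = {index_of_reliability:.3f}, SEM = {sem:.2f}")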

  10. Factors Affecting Reliability
  • Fatigue
  • Practice
  • Ability
  • Time between tests
  • Testing circumstances
  • Test difficulty
  • Type of measurement
  • Environment

  11. Validity
  • Content validity - face validity, logical validity
  • Criterion validity - measures are related to a specific criterion
    • Predictive validity
    • Concurrent validity
  • Construct validity - the validity of a test as a measure of psychological constructs
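
A sketch of concurrent (criterion) validity: correlate a new test with an established criterion measured at about the same time. Both score columns are invented:

    # Concurrent validity: Pearson correlation between test and criterion.
    import numpy as np

    field_test = np.array([42.0, 55.0, 47.0, 61.0, 50.0, 58.0])  # invented test scores
    criterion = np.array([44.0, 52.0, 49.0, 63.0, 48.0, 60.0])   # invented criterion scores

    r_xy = np.corrcoef(field_test, criterion)[0, 1]
    print(f"validity coefficient r_xy = {r_xy:.3f}")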

  12. The Relationship between Reliability and Validity
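
A sketch of the standard textbook relationship the slide title points to: a test's validity coefficient cannot exceed the square root of its reliability (the index of reliability), and an observed validity coefficient can be corrected for attenuation. All coefficients below are invented:

    # Validity ceiling and correction for attenuation.
    import math

    r_xx = 0.81  # reliability of the test (invented)
    r_yy = 0.90  # reliability of the criterion (invented)
    r_xy = 0.60  # observed validity coefficient (invented)

    max_validity = math.sqrt(r_xx)          # validity can be at most 0.90 here
    r_true = r_xy / math.sqrt(r_xx * r_yy)  # estimated true-score correlation
    print(f"ceiling = {max_validity:.2f}, corrected r_xy = {r_true:.3f}")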

  13. Possible Test Items 1
  • Be able to define and differentiate between reliability and validity; what types of error does each try to explain?
  • Know the different classes and types of reliability; be able to differentiate between the different scores that are included in reliability.
  • Be able to calculate the reliability of test examples and describe what type/class of reliability is being estimated.

  14. Possible Test Items 2
  • Be able to define/describe/determine Cronbach's alpha, the index of reliability, and the standard error of measurement.
  • Know what factors affect test reliability and how to compensate to make a test more reliable.
  • Be able to describe, define, and differentiate between the types of validity.
  • Be able to describe the different methods for developing a criterion for the evaluation of validity and give examples of the different types of criterion validity.
