
Reference Assessment Programs: Evaluating Current and Future Reference Services


Presentation Transcript


  1. Reference Assessment Programs: Evaluating Current and Future Reference Services Dr. John V. Richardson Jr. Professor of Information Studies UCLA Department of Information Studies

  2. Presentation Outline • Why Survey Our Users? • Question Design and Validity Concerns • Methodological Issues • Mini Case Studies • Recommended Readings

  3. Why Survey Our Users? • Need to know what we don’t know • Satisfaction and dissatisfaction • Loyalty and the Internet • User needs and expectations • Without such data, we can’t design effective new programs • Best practices

  4. Question Design and Validity Concerns • Eight issues that must be addressed to ensure the validity of survey results: • Intent of the question • Clarity of the question • Unidimensionality • Scaling • Number of questions to include • Timing of administration • Question order • Sample sizes

  5. 1. Intent of the Question • RUSA Behavioral Guidelines (1996) • Approachability • Interest in the query, and • Active listening skills • UniFocus (300 factor analyses of the hospitality industry) • Friendliness • Helpfulness or accuracy • Promptness of service

  6. 2. Clarity of the Question • Data from unclear questions may be invalid • Use instructions to enhance question clarity

  7. Mini Case Study • What is the literal correct answer to the question posed?

  8. 3. Unidimensionality • Unidimensionality is a statistical concept that describes the extent to which a set of questions all measure the same topic
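
One common way to check whether a set of questions hangs together is an internal-consistency statistic such as Cronbach’s alpha (a high alpha is consistent with, though not proof of, unidimensionality). The slides do not prescribe a method, so the following is a minimal Python sketch with an invented response matrix, purely for illustration:

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """items: respondents x questions matrix of scale scores."""
        k = items.shape[1]                         # number of questions
        item_vars = items.var(axis=0, ddof=1)      # variance of each question
        total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Invented data: 5 respondents rating 3 questions on a 1-7 scale
    responses = np.array([[7, 6, 7],
                          [5, 5, 6],
                          [3, 4, 3],
                          [6, 6, 5],
                          [2, 3, 2]])
    print(f"alpha = {cronbach_alpha(responses):.2f}")  # ~0.96: items track one topic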

  9. Constellation of Attitudes • Satisfaction • Delight • Intent to Return • Feelings about Experiences • Value • Loyalty

  10. RUSA Behavioral Guidelines • Approachability • Interest in the query • Active listening skills

  11. 4. Scaling • Three key characteristics: • Does the scale have the right number of points (called response options)? • Are the words used to describe the scale points appropriate? • Is there a midpoint or neutral point on the scale?

  12. A. Response Options • A common four-point scale: • Very good, good, fair, and poor • The distance between very good and good is not the same as the distance between fair and poor • Numeric values associated with these options: • 4, 3, 2, and 1 may lead to invalid results…
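
A quick arithmetic illustration of why the 4/3/2/1 coding can mislead (the ratings below are invented): once the labels are coded as equally spaced numbers, two very different response distributions can produce the same mean.

    import statistics
    from collections import Counter

    # Invented ratings on a very good/good/fair/poor scale, coded 4/3/2/1
    polarized = [4, 4, 1, 1]   # half "very good", half "poor"
    middling  = [3, 3, 2, 2]   # half "good", half "fair"

    print(statistics.mean(polarized))  # 2.5
    print(statistics.mean(middling))   # 2.5: identical means, different stories

    # Reporting the full distribution avoids assuming that the gap between
    # "very good" and "good" equals the gap between "fair" and "poor"
    print(Counter(polarized), Counter(middling))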

  13. Mini Case Study • What is the distance between each of these response options?

  14. B1. Scale Anchors • Very Satisfied … Very Dissatisfied • Very Much Agree … Very Much Disagree • Very Positive … Very Negative • Very Valuable … Very Costly • Very Enjoyable … Very Unpleasant • Very Friendly … Very Unfriendly

  15. Mini Case Study • What are the scale anchors here?

  16. B2. Seven-Point Scales • Scale A: Very Good (7) … Very Poor (1), N/A (0) • Scale B: Excellent (7) … Very Poor (1), N/A (0) • Scale C: Outstanding (7) … Disappointing (1), N/A (0)

  17. C. Wording of Options • The only difference among the preceding scales is the response anchors… • Is “very good” a rigorous enough expectation? • Would “excellent” be better? • What about “outstanding”?

  18. Mini Case Study • How many response points are there? • What is the level of expectation?

  19. D. Midpoint or Neutral Point • The rate of skipped questions increases when a neutral response is not included • Use an odd number of response points • Also, a neutral response provides a way to treat missing data

  20. Mini Case Study • What’s the midpoint?

  21. 5. Number of Questions • Short enough that users will answer all the questions • Long enough to gather the information needed for decision-making

  22. A. Longer Surveys • Take more time and effort on the part of the respondent • A high perceived “cost of completion” results in partially or completely unanswered surveys

  23. B. Likelihood of Complete Responses • The more salient or important the topic is to users, the more likely they are to complete a longer survey • Multiple questions measuring a single attitude make for a longer survey, but they also aid in evaluating user attitudes

  24. 6. Timing and Ease • During or immediately following the transaction • Blurring together? • Comment cards, mail, or interactive voice response (IVR) • Delay seems to produce more positive results • Electronic reference allows for ease of administration (more on PaSS™ later)

  25. 7. Question Order • Specific questions first • Technology, resources, or staffing • More general questions second • Value, overall satisfaction, intent to return • Halo effect • In a four-question survey (one overall and three specific questions), asking the general question last produces better data

  26. Mini Case Study

  27. 8. Sample Sizes • Depends upon population size • Error rate • Confidence • Consult a table of sample sizes

  28. A. Error Rate • Defined as the precision of measurement • Accurate to plus or minus some figure • Has to be precise enough to know which direction service quality is going (i.e., up or down)

  29. B. Confidence • Refers to the overall confidence in the results: • A .99 confidence level means one can be relatively certain that the results fall within the stated range 99% of the time • A .95 confidence level is common • A .90 confidence level is less common; it requires fewer respondents, but the results are less certain
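
As a rough guide to this tradeoff, the standard formula for a large population is n = z² p(1 - p) / d², where z comes from the confidence level, p is the expected proportion (0.5 is the worst case), and d is the error rate. A small Python sketch (not from the slides, just the textbook formula):

    import math

    # Standard normal z-values for common confidence levels
    Z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}

    def needed_n(confidence: float, error: float = 0.05, p: float = 0.5) -> int:
        """Respondents needed for a proportion estimate on a large population."""
        z = Z[confidence]
        return math.ceil(z**2 * p * (1 - p) / error**2)

    for cl in (0.90, 0.95, 0.99):
        print(f"{cl:.2f} confidence, +/- 5%: n = {needed_n(cl)}")
    # 0.90 -> 271, 0.95 -> 385, 0.99 -> 664: lower confidence needs fewer respondents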

  30. C. Population and Sample • Population (N) refers to the people of interest • Sample (n) refers to the people measured to represent the population • Response rate is the proportion of those sampled who actually respond to the survey

  31. D. Population & Sample Size • N = 100 → n = 80 • N = 200 → n = 132 • N = 500 → n = 217 • N = 1,000 → n = 278 • N = 10,000 → n = 370 • N = 20,000 → n = 377 • SOURCE: Robert V. Krejcie and Daryle W. Morgan, “Determining Sample Size for Research Activities,” Educational and Psychological Measurement 30 (Autumn 1970): 607-610
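
The table follows Krejcie and Morgan’s finite-population formula, n = χ²NP(1 - P) / (d²(N - 1) + χ²P(1 - P)), with χ² = 3.841 (the .95 level), P = 0.5, and d = 0.05. A short Python sketch that reproduces the values above:

    def krejcie_morgan(N: int, chi2: float = 3.841, p: float = 0.5, d: float = 0.05) -> int:
        """Required sample size n for a finite population of size N."""
        n = (chi2 * N * p * (1 - p)) / (d**2 * (N - 1) + chi2 * p * (1 - p))
        return round(n)

    for N in (100, 200, 500, 1000, 10000, 20000):
        print(f"N = {N:>6} -> n = {krejcie_morgan(N)}")
    # Matches the table: 80, 132, 217, 278, 370, 377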

  32. Appropriate Sample Sizes

  33. Case Studies • Much of the extant surveying of reference service is inadequate or misleading and can result in poor decision-making • Improving user service means understanding what leads to satisfied and loyal users • Patron Satisfaction Survey (PaSS)™ • http://www.vrtoolkit.net/PaSS.html

  34. Recommended Bibliographies • 1,000 citations to reference studies at • http://purl.org/net/reference • 300 citations to virtual reference studies at • http://purl.org/net/vqa

  35. Best Single Overview • Richardson, “The Current State of Research on Reference Transactions,” In Advances in Librarianship, vol. 26, pages 175-230, edited by Frederick C. Lynden. New York: Academic Press, 2002.

  36. Recommended Readings • Saxton and Richardson, Understanding Reference Transactions (2002) • Most complete list of dependent and independent variables used in the study of reference service • McClure et al., Statistics, Measures and Quality Standards (2002) • Most complete list of measures for virtual reference work

  37. Questions and Answers • What do you want to know now?
