
Best Practices in Job Analysis


Presentation Transcript


  1. Best Practices in Job Analysis • 2007 Annual Conference, Council on Licensure, Enforcement and Regulation • Speakers: Patricia Muenzen, Lee Schroeder • Moderator: Sandra Greenberg

  2. The Survey Experience • What makes people take surveys? • What is their experience with paper-and-pencil and computer-delivered surveys? • How can you design and administer the best possible survey?

  3. Components of the Survey Experience • Survey content (what) • Survey sampling plan (who) • Survey administration mode (how) • Survey taker experience (where, when, why)

  4. Designing a Successful Survey • Survey Content (what) • Statements to be rated • Rating scales • Demographic data collection • Qualitative data collection

  5. Survey Content – Statements to be Rated • Statements to be rated may include domains of practice, tasks/activities, and knowledge and skills • How thorough is your development process? Who is involved? What process do they use?

  6. Survey Content – Rating Scales • Rating scales may include time spent/frequency, importance/criticality, proficiency at licensure • How do you select your rating scales? What questions do you want to answer? How many scales do you use?

  7. Survey Content – Demographic Data Collection • Demographic questions may include years of experience, work setting, educational background, organizational size • What do you need to know about your respondents? How might respondent background characteristics influence their survey ratings?

  8. Survey Content – Qualitative Data Collection • Open-ended questions might address items missing from the survey, changes in the practice of the profession, and reactions to the survey • What information do your respondents need to provide that is not available through closed-ended questions?

  9. Delivering a Successful Survey – Sampling Plan • Survey sampling plan (who) • Who do you want to take your survey? What should your sample size be? Should you stratify? (A stratified-sampling sketch follows.)
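
As one way to act on the stratification question, here is a minimal sketch of proportional stratified sampling. The roster, strata, and counts are hypothetical, purely for illustration; only the 12,000 total echoes the study described later.

```python
import random

# Hypothetical roster of licensees and work-setting strata; names and
# counts are illustrative, not from the presentation.
roster = [(f"L{i:05d}", random.choice(["hospital", "clinic", "private"]))
          for i in range(20000)]

def stratified_sample(roster, total_n):
    """Proportional stratified sample: each stratum is sampled in
    proportion to its share of the population."""
    by_stratum = {}
    for licensee_id, stratum in roster:
        by_stratum.setdefault(stratum, []).append(licensee_id)
    sample = []
    for members in by_stratum.values():
        k = round(total_n * len(members) / len(roster))  # proportional allocation
        sample.extend(random.sample(members, k))
    return sample

# 12,000 mirrors the study described later (6,000 TPS + 6,000 CBS).
sample = stratified_sample(roster, total_n=12000)
print(len(sample))  # ~12,000; rounding can shift the total by a unit or two
```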

  10. Delivering a Successful Survey – Administration • Survey administration mode (how) • Delivery could be via a traditional paper survey (TPS) or a computer-based survey (CBS) • Which administration mode should you select?

  11. TPS vs. CBS • TPS advantages: invitation to participate is more likely to be delivered, more rating scales fit per page, better for a less technologically sophisticated audience • CBS advantages: cheaper, quicker, logistically easier versioning, inexpensive options for visual display (color, graphics), more and easier-to-read responses to qualitative questions, in-process data verification

  12. Comparability of Survey Administration Modes • An empirical investigation of two data collection modes: traditional paper survey (TPS) and computer-based survey (CBS)

  13. General Survey Characteristics • Two data collection modes were used: TPS and CBS • A total of 12,000 individuals were sampled: 6,000 TPS and 6,000 CBS • A total of 150 activities were rated for frequency and priority • Approximately 20 demographic questions

  14. Three General Research Questions • Is there a difference in response rates across modes? • Are there demographic differences across the administration modes (e.g., do younger people respond more frequently to one mode over another)? • Are there practical administration mode differences in developing test specifications?

  15. Response Rate • TPS: of 6,000 sampled, 263 were removed due to bad addresses and limited practice; 1,709 were returned, for an adjusted return rate of 30% • CBS: of 6,000 sampled, 1,166 were removed due to bad email addresses and limited practice; 1,115 were returned, for an adjusted return rate of 21% (the adjusted-rate arithmetic is sketched below)
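
A one-line check of the adjusted-return-rate arithmetic, shown here with the TPS figures (adjusted rate = returned / (sampled − removed)):

```python
# Adjusted return rate = returned / (sampled - removed); TPS figures above.
sampled, removed, returned = 6000, 263, 1709
print(f"{returned / (sampled - removed):.0%}")  # -> 30%
```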

  16. Response Rate • With the aggressive mailing (five stages, incentives, and reduced survey length), the TPS response rate was 9 percentage points higher than the CBS rate in this study • That said, return rates at or above 20% are typical for unsolicited surveys • The effect of spam filters is unknown

  17. Demographic Differences • Very few differences • It appears the administration mode had no effect on the composition of the respondent population

  18. Demographic Differences – Sample geographic location question (responses were shown as a chart in the original slide)

  19. Demographic Differences • Do the activities represent what you do in your position? • Slight difference in perception, though agreement was very high for both modes

  20. Rating Scales • Mean frequency ratings on a six-point scale (0 to 5), across 150 tasks: CBS mean = 2.24, TPS mean = 2.18 • Mean priority ratings on a four-point scale (1 to 4), across 150 tasks: CBS mean = 2.96, TPS mean = 2.95

  21. Test Specifications • Small differences were observed in the mean ratings • Do these differences affect decisions made on test content?

  22. Test Specifications • Evaluated inclusion criteria for the final outline • Created an artificial task exclusion criterion: 1.25 standard deviation units below the mean for each administration mode • Frequency: CBS mean = 2.24, cutpoint = 0.83; TPS mean = 2.18, cutpoint = 0.78 • Priority: CBS mean = 2.96, cutpoint = 2.40; TPS mean = 2.95, cutpoint = 2.40 • Activities above the cutpoint are included; those below are excluded from the final content outline (the rule is sketched in code below)
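
A minimal sketch of the exclusion rule just described, not the study's actual code; the task IDs and ratings are hypothetical:

```python
from statistics import mean, stdev

def exclusion_cutpoint(means):
    """Cutpoint = mean of task mean ratings minus 1.25 standard deviations."""
    return mean(means) - 1.25 * stdev(means)

def included_tasks(task_means):
    """Keep tasks whose mean rating is at or above the cutpoint."""
    cut = exclusion_cutpoint(list(task_means.values()))
    return {task for task, m in task_means.items() if m >= cut}

# Hypothetical mean frequency ratings for a few tasks; the study used 150
# tasks per administration mode.
cbs_means = {"task_01": 2.4, "task_02": 3.1, "task_29": 1.9, "task_68": 0.6}
print(included_tasks(cbs_means))  # task_68 falls below the cutpoint
```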

  23. Test Specifications – Frequency Ratings • Over 99% classification accuracy • Only one difference in the activities excluded (task #68 on CBS and task #29 on TPS)

  24. Test Specifications – Priority Ratings • Over 99% classification accuracy (an agreement check is sketched below)
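
Classification accuracy here can be read as the share of tasks that receive the same include/exclude decision under both administration modes. A sketch with hypothetical task sets (the study rated 150 activities):

```python
# Share of tasks getting the same include/exclude decision in both modes.
def classification_accuracy(included_cbs, included_tps, all_tasks):
    agree = sum((t in included_cbs) == (t in included_tps) for t in all_tasks)
    return agree / len(all_tasks)

all_tasks = {f"task_{i:03d}" for i in range(1, 151)}
included_cbs = all_tasks - {"task_068", "task_029"}  # hypothetical exclusions
included_tps = all_tasks - {"task_068"}              # hypothetical exclusions
print(f"{classification_accuracy(included_cbs, included_tps, all_tasks):.1%}")
# -> 99.3%: one disagreeing task out of 150 (illustrative numbers only)
```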

  25. Results of Test Specification Analysis • The number of misclassifications approaches random error • Differences are within a standard error of the task exclusion cutpoint • Because they fall near the standard error, borderline activities are reviewed by the committee for final inclusion

  26. Conclusions • Findings are based on this limited sample and may not generalize • Response rate was higher for TPS • No differences in the respondent sample (demographics) • The TPS group rated the elements slightly more favorably • Most importantly, there were no practical differences in final test specification development

  27. Conclusions, continued • Cost • TPS: over $6.00 in postage and printing per survey (five-stage mailing), plus mailing labor; a conservative estimate is $6.50 per unit, or $39,000 in this case (estimate excludes data entry and scanning) • CBS: initial cost for survey setup and QC; no postage, printing, or scanning, and limited labor after initial setup; probably less than $10,000 to administer this type of survey (the arithmetic is checked below)
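
A quick check of the per-unit arithmetic above:

```python
# $6.50 per mailed unit across the 6,000-person TPS sample.
tps_per_unit, n_mailed = 6.50, 6000
print(f"TPS total: ${tps_per_unit * n_mailed:,.0f}")  # -> TPS total: $39,000
```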

  28. Delivering a Successful Survey – User Experience • Survey taker experience (where, when, why) • To enhance the user experience, the survey should be maximally: accessible, visually appealing, easy to complete, relevant

  29. User Experience – Suggestions • To reduce time demands, create different versions of the survey • To ensure questions are correctly targeted to respondent subgroups, use routing (a minimal routing sketch follows) • To motivate respondents, use a PR campaign and participation incentives
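
One minimal way to implement routing, assuming the respondent's subgroup is known from an earlier demographic question; the question IDs and subgroups here are hypothetical:

```python
# Question IDs and subgroups are hypothetical, purely for illustration.
QUESTION_BANK = {
    "all":        ["years_of_experience", "work_setting"],
    "supervisor": ["staff_size", "delegation_tasks"],
    "clinician":  ["direct_care_tasks"],
}

def route_questions(subgroup):
    """Return the questions a respondent in `subgroup` should see:
    the common block plus any subgroup-specific block."""
    return QUESTION_BANK["all"] + QUESTION_BANK.get(subgroup, [])

print(route_questions("supervisor"))
```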

  30. Discussion • What best practices can you share with us regarding: • Survey content (what)? • Survey administration mode (how)? • Survey sampling plan (who)? • Survey taker experience (where, when, why)?

  31. Speaker Contact Information • Patricia M. Muenzen, Director of Research Programs, Professional Examination Service, 475 Riverside Drive, 6th Fl., New York, NY 10115; Voice: 212-367-4273; Fax: 917-305-9852; pat@proexam.org; www.proexam.org • Dr. Lee L. Schroeder, President, Schroeder Measurement Technologies, Inc., 2494 Bayshore Blvd., Suite 201, Dunedin, FL 34698; Voice: 727-738-8727; Toll Free: 800-556-0484; Fax: 727-734-9397; e-mail: lschroeder@smttest.com; www.smttest.com
