
National Evaluation of Specialty Selection


Presentation Transcript


  1. National Evaluation of Specialty Selection Hywel Thomas and Celia Taylor On behalf of the NESS team: Ian Davison, Steven Field, Harry Gee, Janet Grant, Andy Malins, Laura Pendleton and Elizabeth Spencer NESS was commissioned and funded by the Policy Research Programme in the Department of Health (Award number 016 0114). The views expressed are not necessarily those of the Department.

  2. Background
  • The specialty selection process is one of the hurdles on the way to consultant/GP principal posts
  • 2009: 11,417 applicants for 6,580 entry-level posts (competition ratio 1.7 to 1)
  • Selection became increasingly politically sensitive following MTAS (Medical Training Application Service, 2007)
  • This highlighted the need for both evolution and evaluation of the selection process

  3. Aims and scope of NESS
  • To evaluate the first round of selection for specialty training in 2009 against four key criteria:
    • Acceptability
    • Fairness
    • Effectiveness: validity and reliability
    • Value for money
  • 13 specialties included in the project
  • Data collection primarily in 5 deaneries
  • Did not obtain complete data for every specialty/deanery

  4. Data Sources

  5. Acceptability: “This selection process was fair” [agreement chart not reproduced in transcript]

  6. Fairness: Effect of personal characteristics on selection scores
  • Multiple linear regression analysis by specialty
  • Scores standardised to allow comparability across specialties
  • N = 5 specialties and 1,553 candidates
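A minimal sketch of this kind of analysis is below, assuming a hypothetical data frame with one row per candidate; the column names (score, specialty, sex, ethnicity, uk_grad) are illustrative, not taken from the study:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per candidate with total selection score,
# specialty applied to, and personal characteristics.
df = pd.read_csv("candidates.csv")  # illustrative file name

# Standardise scores within specialty so that effects are comparable
# across specialties, as the slide describes.
df["z_score"] = df.groupby("specialty")["score"].transform(
    lambda s: (s - s.mean()) / s.std(ddof=1)
)

# One multiple linear regression per specialty.
for specialty, group in df.groupby("specialty"):
    fit = smf.ols("z_score ~ C(sex) + C(ethnicity) + C(uk_grad)",
                  data=group).fit()
    print(specialty)
    print(fit.params)  # effect of each characteristic on standardised score
```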

  7. Effectiveness: Predictive validity of shortlisting scores
  • Pearson correlation coefficients: uncorrected, and corrected for restriction of range and for unreliability of shortlisting scores where possible
  • N = 8 specialties, 13 selection processes and 2,411 candidates
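The corrections named on the slide are the standard ones: Thorndike's Case II formula for direct range restriction (only shortlisted candidates reach interview, so the observed correlation is computed on a restricted sample) and disattenuation for predictor unreliability. A sketch with illustrative numbers; that NESS applied exactly these formulas is an assumption:

```python
import math

def correct_restriction_of_range(r, sd_applicants, sd_shortlisted):
    """Thorndike Case II correction for direct range restriction."""
    u = sd_applicants / sd_shortlisted   # unrestricted / restricted SD
    return (r * u) / math.sqrt(1 + r**2 * (u**2 - 1))

def correct_for_unreliability(r, shortlist_reliability):
    """Disattenuate for measurement error in the shortlisting score."""
    return r / math.sqrt(shortlist_reliability)

# Illustrative numbers only (not from the study):
r = 0.30
r = correct_restriction_of_range(r, sd_applicants=10.0, sd_shortlisted=7.0)
r = correct_for_unreliability(r, shortlist_reliability=0.75)
print(round(r, 2))  # corrected validity estimate
```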

  8. Effectiveness: Reliability
  • Internal consistency: Cronbach’s alpha by station (calculation sketched below)
    • N = 10 specialties, 26 selection processes and 3,505 candidates
    • Range 0.35 to 0.83
    • 10/26 (38%) in the recommended range of 0.7 to 0.9
  • Inter-rater reliability: station-level absolute intra-class correlations
    • N = 4 specialties, 4 selection processes and 395 candidates
    • Range 0.54 to 0.91
    • 16/17 (94%) above the recommended minimum of 0.7
  • Pass-mark reliability (ignores sub-rules at station level and only includes candidates attending interview)
    • N = 5 specialties, 7 selection processes and 919 candidates
    • 12% to 55% of candidates within 1 SEM of the appointment cut-off: raises concerns about fairness
    • 0% to 20% of candidates within 1 SEM of the competency cut-off: raises concerns about competency
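For reference, a minimal sketch of the Cronbach's alpha calculation, assuming scores are arranged as a candidates × items matrix; how items were defined within each station is not given on the slide:

```python
import numpy as np

def cronbach_alpha(score_matrix):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    x = np.asarray(score_matrix, dtype=float)
    k = x.shape[1]                         # number of items
    item_vars = x.var(axis=0, ddof=1)      # per-item variance across candidates
    total_var = x.sum(axis=1).var(ddof=1)  # variance of candidates' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```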

  9. Pass-Mark reliability example
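The example itself is a chart, but the arithmetic behind the percentages on slide 8 is simple: SEM = SD × √(1 − reliability), and the reported figure is the share of candidates whose total score falls within 1 SEM of a cut-off. A sketch with illustrative data:

```python
import numpy as np

def share_within_one_sem(total_scores, reliability, cut_off):
    """SEM and the share of candidates within 1 SEM of a cut-off."""
    scores = np.asarray(total_scores, dtype=float)
    sem = scores.std(ddof=1) * np.sqrt(1.0 - reliability)
    return sem, float(np.mean(np.abs(scores - cut_off) <= sem))

# Illustrative data only (not from the study):
rng = np.random.default_rng(0)
scores = rng.normal(60, 10, size=200)
sem, share = share_within_one_sem(scores, reliability=0.70, cut_off=62)
print(f"SEM = {sem:.1f}; {share:.0%} of candidates within 1 SEM of cut-off")
```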

  10. Value for Money
  • Costing model developed (http://www.education.bham.ac.uk/research/projects1/dissemination.shtml)
  • Modified Brogden’s model used to estimate cost-benefit (basic form sketched below)
  • Cost-benefit depends on:
    • Selection process design
    • Predictive validity
    • Competition ratio
    • SD of training performance of candidates
    • Length of training
    • Drop-out rate
    • Number requiring extensions to training
    • Proportion of unsuccessful candidates remaining in the NHS
  • Cost estimates for ST1 selection: £3.2m for hospital specialties (£800 per post) and £2.4m for GP (£900 per post)
  • Cost-benefit estimates, compared to random selection, ranged from £78-97m for hospital specialties and £15-20m for GP
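The basic Brogden–Cronbach–Gleser utility model underlying the slide is sketched below; the NESS modifications (drop-out, extensions to training, retention of unsuccessful candidates) are not reproduced, and all parameter values are illustrative:

```python
from scipy.stats import norm

def brogden_utility(n_selected, years, validity, sd_y, selection_ratio, cost):
    """Utility of a selection process relative to random selection:
    N * T * r * SD_y * mean_z - cost, where mean_z is the mean
    standardised predictor score of those selected, assuming top-down
    selection on a normally distributed predictor."""
    z_cut = norm.ppf(1.0 - selection_ratio)      # predictor cut-off
    mean_z = norm.pdf(z_cut) / selection_ratio   # mean z above the cut-off
    return n_selected * years * validity * sd_y * mean_z - cost

# Illustrative parameters only (not the NESS estimates); the selection
# ratio uses the applicant and post counts from slide 2.
print(brogden_utility(n_selected=6580, years=5, validity=0.3,
                      sd_y=10_000, selection_ratio=6580 / 11417,
                      cost=3_200_000))
```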

  11. Summary and implications for selection
  • Largest study of specialty selection to date
  • Did not obtain complete data – but no evidence of response bias
  • High acceptability of selection processes to candidates and assessors
  • Shortlisting scores are a good predictor of selection scores
  • Long-term follow-up is required on predictive validity, particularly to assess fairness (if scores are predictive then UK-trained candidates should make better trainees, but this needs evidence)
  • Inter-rater reliability was good – but potential collusion?
  • Internal consistency, and hence pass-mark reliability, could be improved: more stations, each with one assessor?
  • Only one specialty had a formal standard-setting process to identify the competency cut-off
  • Value for money could only be estimated – but the estimates suggest high returns to investment in selection
  • Selection has continued to evolve since 2009, e.g. an increase in nationally coordinated selection processes
