
Developments in the use of OSCEs to assess clinical competence

Developments in the use of OSCEs to assess clinical competence. The Third International Conference on Medical Education in the Sudan. Katharine Boursicot, BSc, MBBS, MRCOG, MAHPE Reader in Medical Education Deputy Head of the Centre for Medical and Healthcare Education



Presentation Transcript


  1. Developments in the use of OSCEs to assess clinical competence The Third International Conference on Medical Education in the Sudan Katharine Boursicot, BSc, MBBS, MRCOG, MAHPE Reader in Medical Education Deputy Head of the Centre for Medical and Healthcare Education St George’s, University of London

  2. Introduction of OSCEs • Why was the OSCE developed? • Dissatisfaction with traditional forms of assessing clinical competence • Long cases • Short cases • Vivas • Reliability and validity issues • Burgeoning field of medical education and psychometrics • Long case – poor reliability: too small a sample (1 or 2 cases); random selection of cases; examiner variation; unstructured marking • Short cases – poor reliability: too much variation in the cases selected; the same examiners followed the candidate through all cases; unstructured marking • Vivas – poor reliability: random content; examiner variation; unstructured marking; poor validity: tests knowledge, not clinical skills

  3. Characteristics of assessment instruments • Utility = Reliability × Validity × Educational impact × Acceptability × Feasibility • Reference: Van der Vleuten, C. The assessment of professional competence: developments, research and practical implications. Advances in Health Sciences Education 1996; 1: 41-67

  4. Test characteristics • Utility = Reliability × Validity × Educational impact × Acceptability × Feasibility • Reliability of a test/measure • reproducibility of scores • capability of differentiating consistently between good and poor students

  5. Reliability • Competencies are highly domain-specific • Broad sampling is required to obtain adequate reliability • Need to sample the candidate’s ability over several tasks • Need to have multiple observers
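The sampling argument on this slide is usually quantified as an internal-consistency coefficient computed across stations. A minimal sketch of Cronbach's alpha, using invented candidate-by-station scores (not data from the presentation):

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a candidates x stations score matrix."""
    k = scores.shape[1]                          # number of stations
    item_var = scores.var(axis=0, ddof=1).sum()  # sum of per-station variances
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of candidates' totals
    return (k / (k - 1)) * (1 - item_var / total_var)

# Made-up scores: 6 candidates x 4 stations. These toy scores track each
# other very closely, so alpha comes out near 1; real high-stakes OSCEs
# typically aim for alpha of at least 0.7-0.8.
scores = np.array([
    [14, 12, 15, 13],
    [10,  9, 11, 10],
    [16, 15, 17, 16],
    [ 8,  7,  9,  8],
    [12, 11, 13, 12],
    [15, 14, 16, 15],
])
alpha = cronbach_alpha(scores)
```

Broad sampling helps because alpha rises with the number of (reasonably consistent) stations, which is the statistical version of the slide's point about sampling many tasks with multiple observers.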

  6. Blueprinting • Enables test to map to the curriculum outcomes • Allows sufficient sampling and specification of content (i.e. not random cases)

  7. Traditional ‘blueprinting’

  8. Modern OSCE blueprint

  9. Number of assessors/observers per station • Is one good enough? • Wouldn’t two be better than one? • Why not have the assessor follow the candidate around the stations?

  10. Reliability of scores on an OSCE [Figure: reliability plotted against hours of testing time, comparing three designs – the same examiner for every case, 1 new examiner for each case, and 2 new examiners for each case]

  11. Miller’s model of competence • Knows • Knows how • Shows how • Does • ‘Knows’ and ‘Knows how’ reflect cognition ~ knowledge; ‘Shows how’ and ‘Does’ reflect behaviour ~ skills/attitudes • Professional authenticity increases towards ‘Does’ • Miller GE. The assessment of clinical skills/competence/performance. Academic Medicine (Supplement) 1990; 65: S63-S67.

  12. Testing formats mapped to Miller’s model • Does – performance assessment in vivo: mini-CEX, DOPS, video • Shows how – performance assessment in vitro: OSCE, OSLER • Knows how – (clinical) context-based tests: SBA, EMQ, SAQ • Knows – factual tests: SBA/MCQ

  13. Test characteristics: educational impact • Utility = Reliability × Validity × Educational impact × Acceptability × Feasibility • Educational impact: the relationship between assessment and learning (curriculum – teacher – student – assessment) • OSCE: encourages students to learn to perform the required skills

  14. Advantages of using OSCEs in clinical assessment • Careful specification of content = validity • Observation of a wider sample of activities = reliability • Structured interaction between examiner and student = reliability • Structured marking schedule = reliability • Each student has to perform the same tasks = acceptability (fairer test of the candidate’s clinical abilities)

  15. Disadvantages of using OSCEs in clinical assessment – practical issues • Cost • Feasibility issues • Major organisational task • Requires many examiners to be present • Requires suitable facilities

  16. Disadvantages of using OSCEs in clinical assessment – academic issues • In-vitro situation: testing competence, not testing actual performance in vivo with patients • Deconstruction of doctor-patient consultation into component parts – not holistic

  17. Summary of reasons for use of OSCEs • OSCEs test clinical and communication skills - clinical competence • Reliable • Fair test for all candidates • Organisational and resource implications • Should be used as part of a programme of assessment

  18. OSCEs • OSCEs have become very widespread • Undergraduate • Postgraduate • Medicine • Other clinical professions

  19. OSCE circuit

  20. Clinical examination skills

  21. Clinical examination skills

  22. Communication skills

  23. Suturing

  24. IV cannulation

  25. Ophthalmoscopy

  26. Dentistry: History taking

  27. Drilling

  28. Laying up sterile equipment

  29. Fitting a matrix

  30. Veterinary medicine: setting up for X-Ray

  31. Intubation of dog

  32. Examination of foreleg of horse

  33. Administration of injection

  34. Trimming of sheep hoof

  35. Sampling of milk/mastitis in cow

  36. Infection control

  37. Current issues • Problematic OSCEs • Authenticity • Standard setting • Compensation • SP recruitment, scripts and training • Examiner training

  38. 1. Problematic OSCEs Suboptimal practices which undermine the usefulness of the OSCE format • Too few stations (fewer than 12) – poor reliability • Poorly blueprinted OSCE – insufficient spread of skills tested – poor reliability and validity • MCQs/SAQs/vivas disguised as OSCEs – poor validity • Poorly constructed stations (lack of congruence) – poor validity and reliability • Inconsistency in the conduct of a station (site, examiner, SP, equipment, timing) – poor reliability

  39. 2. Authenticity • Why should all stations last 5 minutes? • Not authentic • Some tasks require more time • Too little time = truncated ‘samples’ of clinical skills = NOT holistic • Match clinical situations as closely as possible • 5, 10, 15, 20 minutes

  40. 3. Standard setting • The process by which pass/fail decisions are made • Wide variety of processes • Arbitrary • Overall judgement of the examiner at each station • Angoff for each station • Complicated schemes involving grades • Gold standard method for OSCEs…
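Of the processes listed, Angoff is the most widely described test-centred one: each judge estimates the probability that a borderline candidate would succeed on each checklist item, and the cut score is the expected checklist score of that borderline candidate. A rough sketch (the judges' estimates are invented for illustration):

```python
import numpy as np

# Angoff: each judge estimates the probability that a *borderline*
# candidate would succeed on each checklist item.
# Rows = judges, columns = items (made-up estimates).
estimates = np.array([
    [0.8, 0.6, 0.5, 0.9, 0.7],
    [0.7, 0.5, 0.6, 0.8, 0.6],
    [0.9, 0.6, 0.4, 0.9, 0.7],
])

# Station cut score = sum over items of the mean judge estimate,
# i.e. the expected checklist score of a borderline candidate.
cut_score = estimates.mean(axis=0).sum()
print(round(cut_score, 2))  # 3.4 out of a maximum of 5 item marks
```

Note that Angoff relies on judges imagining a borderline candidate in the abstract, which is one reason the performance-based methods on the next slides are preferred for OSCEs.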

  41. Performance-based standard setting methods • These are the GOLD Standard for OSCEs • Borderline group method • Contrasting group method • Regression based standard method

  42. Regression-based standard • Each station mark sheet combines an itemised checklist (item marks summed to a total checklist score) with an overall global rating on a five-point scale: 1 = clear fail, 2 = borderline, 3 = clear pass, 4 = excellent, 5 = outstanding • The passing score X is derived from the relationship between checklist scores and the overall ratings
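That mark sheet can be turned into a cut score numerically: regress checklist scores on global ratings across all observed encounters, then read off the checklist score the line predicts at the borderline rating. A minimal sketch, assuming the slide's five-point scale (2 = borderline) and invented paired scores:

```python
import numpy as np

# Paired data per candidate-station encounter (made up for illustration):
# global rating 1-5 (2 = borderline) and total checklist score.
ratings   = np.array([1, 2, 2, 3, 3, 3, 4, 4, 5, 5])
checklist = np.array([8, 11, 13, 15, 16, 17, 19, 20, 22, 23])

# Least-squares line: checklist = slope * rating + intercept.
slope, intercept = np.polyfit(ratings, checklist, 1)

# Passing score X = the checklist score predicted at the borderline rating.
passing_score = slope * 2 + intercept
print(round(passing_score, 1))  # 12.1
```

The borderline group method is the simpler cousin: instead of fitting a line, take the mean checklist score of just the candidates rated 2 ("borderline") as the cut; the regression version uses every examiner judgment, not only the borderline ones.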

  43. Performance-based standard setting • Advantages • Utilises the expertise of the examiners • they are observing the performance of the students • they are in a position to make a (global) judgement about the performance • based on • their clinical expertise • expected standards for the level of the test • knowledge of the curriculum/teaching

  44. Performance-based standard setting • Advantages • Large number of examiners set a collective standard while observing the candidates – not just an academic exercise • Reliable: cut-off score based on large sample of judgments • Credible and defensible: based on expert judgment in direct observation

  45. Performance-based standard setting • Disadvantages • Requires a fairly large cohort of candidates to achieve enough numbers in the ‘borderline’ group • Passing score not known in advance • Judgments may not be independent of checklist scoring • Requires expert processing of marks immediately after the exam • Checking of results • Delay in producing results

  46. 4. Compensation • A number of students achieve a total score above the overall pass mark BUT fail a significant number of stations • Poor performance on a number of stations is compensated by good performance on other stations

  47. Concerns • Students do not have an acceptable minimum standard across a range of skills • Students could proceed to graduation without having acceptable clinical examination or history taking skills

  48. Criterion 1: pass mark 60.3 (borderline group method) • Criterion 2: must pass ≥10/19 stations • Applying Criterion 2, two extra students failed
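The two criteria act conjunctively, which is what blocks compensation: clearing the overall pass mark is not enough if too many individual stations are failed. A minimal sketch using the slide's figures (the example candidates are invented):

```python
# Conjunctive pass criteria, as on the slide:
PASS_MARK = 60.3      # Criterion 1: overall cut score (borderline group method)
MIN_STATIONS = 10     # Criterion 2: must pass at least 10 of 19 stations

def outcome(total_score: float, stations_passed: int) -> str:
    """A candidate must satisfy BOTH criteria to pass."""
    if total_score >= PASS_MARK and stations_passed >= MIN_STATIONS:
        return "pass"
    return "fail"

# Invented candidates: the first clears the overall mark but fails too
# many stations, so compensation is blocked.
print(outcome(total_score=63.0, stations_passed=8))    # prints "fail"
print(outcome(total_score=61.5, stations_passed=14))   # prints "pass"
```

This is why adding the station-count criterion failed two extra students who would have passed on the overall mark alone.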
