
The NER model to assess accuracy in respeaking

ITU-T Workshop on “Telecommunications relay services for persons with disabilities” (Geneva, 25 November 2011). The NER model to assess accuracy in respeaking. Pablo Romero-Fresco (Roehampton University, CAIAC research centre), Juan Martínez (Respeaking consultant).


Presentation Transcript


  1. ITU-T Workshop on “Telecommunications relay services for persons with disabilities” (Geneva, 25 November 2011). The NER model to assess accuracy in respeaking. Pablo Romero-Fresco (Roehampton University, CAIAC research centre), Juan Martínez (Respeaking consultant)

  2. Accuracy in Respeaking. Quality in respeaking: delay and accuracy

  3. Accuracy in Respeaking. 97-98% accuracy

  4. Basic requirements for a model:
     1) Functional and easy to apply
     2) Include the basic principles of WER calculations in speech recognition (SR)
     3) Different programmes, different editing
     4) Possibility of edited and yet accurate respeaking
     5) Compare subtitles with the original spoken text
     6) Include other relevant info (delay, position, speed)
     7) Provide both a percentage and food for thought in training

  5. Traditional WER methods (US National Institute of Standards and Technology): Accuracy Rate = (N - Errors) / N × 100 (%). But...

  6. Traditional WER methods (US National Institute of Standards and Technology): Accuracy Rate = (N - Errors) / N × 100 = 16%. But...
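A minimal sketch of the traditional WER-style calculation shown on slides 5-6, in Python; the function name and the example figures are illustrative and not taken from the presentation:

```python
# Traditional WER-style accuracy: every error counts the same, whatever its
# impact on the viewer. Accuracy Rate = (N - Errors) / N * 100.

def wer_accuracy(n_words: int, n_errors: int) -> float:
    return (n_words - n_errors) / n_words * 100

# 6 errors in a 200-word text score 97.0%, whether they are harmless slips
# or errors that change the meaning entirely.
print(f"{wer_accuracy(200, 6):.1f}%")  # 97.0%
```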

  7. Spain: SDH guidelines. Different European countries. UN Accessibility Focus Group

  8. NER Model: Accuracy = (205 – 3 – 2) / 205 × 100 = 98.6%

  9. NER Model: Accuracy = (226 – 13 – 1) / 226 × 100 = 93.8%. Assessment: poor editing (not quantity, but quality)

  10. NER Model: Accuracy = (257 – 1 – 13) / 257 × 100 = 94.3%. Assessment: poor recognition (including serious mistakes)
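The three examples above apply the same formula, Accuracy = (N - E - R) / N × 100, where N is the number of words in the respoken subtitles, E the edition errors and R the recognition errors. A minimal Python sketch using the figures from slide 9 (function and variable names are illustrative):

```python
# NER accuracy: separate edition errors (E) from recognition errors (R)
# so that a low score can be traced back to its cause.

def ner_accuracy(n: float, e: float, r: float) -> float:
    return (n - e - r) / n * 100

# Slide 9: 226 words, 13 edition errors, 1 recognition error -> poor editing.
print(f"{ner_accuracy(226, 13, 1):.1f}%")  # 93.8%
```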

  11. WGBH: “There is a wide range of error types in real time captioning and they are not all equal in their impact to caption viewers”. “Treating all errors the same does not provide a true picture of caption accuracy”.

  12. Types of errors (feedback from DTV4ALL project):
      1) “There are errors, yes, but you can easily figure out what the correct form was meant to be. Now I’m bilingual: I can speak English and teletext”
      2) “Live subtitles? Sound like gobbledygook to me”
      3) “As far as I’m concerned they are not errors, but lies”

  13. Types of errors (feedback from DTV4ALL project):
      1) Minor edition or recognition errors (0.25)
      2) Normal edition or recognition errors (0.5)
      3) Serious errors (1)
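A hedged sketch of how these severity weights could feed into E and R before the NER formula is applied; the error lists and the weights dictionary below are invented for illustration, and only the 0.25 / 0.5 / 1 scores come from the slide:

```python
# Each error is scored by severity (slide 13) and the scores are summed
# into E (edition) and R (recognition) before computing NER accuracy.

WEIGHTS = {"minor": 0.25, "standard": 0.5, "serious": 1.0}

def score(errors):
    return sum(WEIGHTS[severity] for severity in errors)

edition_errors = ["minor", "standard"]               # E = 0.75
recognition_errors = ["minor", "minor", "serious"]   # R = 1.5
n_words = 200

accuracy = (n_words - score(edition_errors) - score(recognition_errors)) / n_words * 100
print(f"{accuracy:.1f}%")  # 98.9%
```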

  14. Minor Errors

  15. Standard Errors

  16. Serious errors

  17. Serious errors

  18. NER MODEL

  19. NER Orange with apples

  20. Graciñas (thank you). Pablo Romero-Fresco (p.romero-fresco@roehampton.ac.uk)

  21. ITU-T Workshop on “Telecommunications relay services for persons with disabilities” (Geneva, 25 November 2011). The NER model to assess accuracy in respeaking. Pablo Romero-Fresco (Roehampton University, CAIAC research centre) (p.romero-fresco@roehampton.ac.uk), Juan Martínez (Respeaking consultant)
