This study examines the integration of web accessibility metrics, such as the WAB Score, Failure Rate, and UWEM Score, into an evaluation tool, highlighting the challenges encountered and possible solutions. It explores the impact of human filtering and extra formula parameters on accuracy, and proposes ideas for improving metric integration: categorizing metrics into levels to streamline evaluations and automate the basic criteria, and limiting metric scope to improve accuracy and enable a practical, scalable implementation.
Integration of Web Accessibility Metrics in a Semi-Automatic Evaluation Process
Maia Naftali • Osvaldo Clúa
Introduction • What has been done? • Implemented existing metrics in an accessibility evaluation tool: • WAB Score • Failure Rate • UWEM Score • The evaluation process is semi-automatic: it includes a human filter. • What for? • To compare results across different sites. • To analyze, in a real scenario, the difficulties of calculating metrics automatically.
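The simplest of the three metrics above is the Failure Rate: the ratio of actual checkpoint violations to the number of opportunities for violation on a page. A minimal sketch (the function name and tuple shape are illustrative, not taken from the tool described in the slides):

```python
def failure_rate(actual_failures, potential_failure_points):
    """Ratio of checkpoint violations to opportunities for violation.

    Example: a page with 3 images missing ALT text out of 10 images
    has 10 potential failure points and 3 actual failures for that
    checkpoint, so the rate is 0.3.
    """
    if potential_failure_points == 0:
        return 0.0  # nothing to violate: treat as fully passing
    return actual_failures / potential_failure_points
```

As the next slide notes, the denominator (the potential failure points) is not always reported directly by evaluation tools, so estimating it is itself a source of error.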
Difficulties found • Major difficulties • Metric accuracy: • Exact formulas with variable input: how to achieve repeatable results when human criteria are included. • Human filtering is useful, but it requires extra work and depends on the evaluator's knowledge, which makes it hard to apply in a large-scale scenario. • Extra parameters: • Calculating formula parameters that are not directly retrieved from the evaluation results can introduce error; for example, the failure points. • Threshold criteria and tool accuracy: • Guideline checkpoints that are hard to test with an algorithm can add noise to metric computation. • Not all checkpoints are tested by some evaluation tools.
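To see where the extra parameters enter, consider the WAB Score: each checkpoint's failure rate is weighted by the inverse of its WCAG priority and the per-page sums are averaged over all pages evaluated. A hedged sketch of that aggregation (the input layout is an assumption; a real tool would extract these tuples from its evaluation report, and the potential-violation counts are exactly the hard-to-retrieve parameters the slide mentions):

```python
def wab_score(pages):
    """Sketch of the WAB Score aggregation.

    `pages` is a list of pages; each page is a list of
    (violations, potential_violations, priority) tuples, one per
    checkpoint. Priority-1 barriers weigh the most because the
    failure rate is divided by the priority number.
    """
    total = 0.0
    for checkpoints in pages:
        for violations, potential, priority in checkpoints:
            if potential:  # skip checkpoints with no opportunity to fail
                total += (violations / potential) / priority
    return total / len(pages) if pages else 0.0
```

A higher WAB Score means more (and more severe) barriers; a fully conforming site scores 0.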
Ideas to work on for metric integration • Metric categorization into levels • A possible categorization: • Basic: metrics that use only the checkpoints that can be assessed automatically (with an algorithm). For example: does the IMG tag have an ALT attribute? • Semantic or extended: metrics that use the entire checkpoint set. • Pragmatic: metrics that also measure the user experience. • Motivation: automatic tools will be able to calculate the metrics defined as "basic" with a known error rate. Limiting the scope of the metrics' input will facilitate their programmatic implementation; therefore, any evaluation tool could calculate metrics at a known level of accuracy.
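The IMG/ALT example above is the kind of checkpoint a "basic" metric could rely on, since it needs no human judgment to detect. A self-contained sketch using Python's standard-library HTML parser (the class name and counters are illustrative):

```python
from html.parser import HTMLParser

class MissingAltChecker(HTMLParser):
    """Counts <img> tags that lack an ALT attribute, a checkpoint
    simple enough to assess fully automatically."""

    def __init__(self):
        super().__init__()
        self.total_imgs = 0
        self.missing_alt = 0

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            self.total_imgs += 1
            if "alt" not in dict(attrs):
                self.missing_alt += 1

checker = MissingAltChecker()
checker.feed('<p><img src="a.png" alt="logo"><img src="b.png"></p>')
# checker.total_imgs is 2; checker.missing_alt is 1
```

Note that such a check only verifies the attribute's presence; whether the ALT text is meaningful still requires the human filter, which is why it belongs to the "basic" rather than the "semantic" level.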