Automatic Assessment of Contextual Web Accessibility from an Evaluation, Measurement and Adaptation Perspective
Candidate: Markel Vigo Echebarria
Advisor: Julio Abascal González
Donostia, November 23rd 2009
Putting it in context • About 10% of the world’s population lives with a disability • The WWW is not accessible • Web accessibility guidelines • A number of motivations for a barrier-free Web • Evidence shows “guidelines are not enough” • Interaction context has to be captured • Technological gap: automatic tools do not consider context
Outline • Motivation • Web Accessibility Evaluation • Web Accessibility Measurement 3.1 Web Accessibility Quantitative Metric 3.2 Deploying accessibility scores in Search Engines • Contextual Web Accessibility Assessment • Device-tailored Accessibility Assessment • User-tailored Accessibility Assessment • Conclusions
Motivation • Conformance to guidelines is a minimum requirement for developing accessible sites • Evaluation is a key stage • Automatic tools help developers • A comprehensive assessment requires expert involvement • Again, “guidelines are not enough” • Hypothesis: “if we include the user’s interaction context in the web accessibility assessment process, results will better capture the user experience”
Outline • Motivation • Web Accessibility Evaluation • Web Accessibility Measurement 3.1 Web Accessibility Quantitative Metric 3.2 Deploying accessibility scores in Search Engines • Contextual Web Accessibility Assessment • Device-tailored Assessment • User-tailored Assessment • Conclusions
2. Web Accessibility Evaluation • There are a number of accessibility guideline sets • Focus on disability • Access devices • Application environment • A tool for coping with them requires • An evaluation engine independent of guidelines • A language to frame them • Several guideline sets have been studied to find patterns • General & desktop: WCAG 1.0, Section 508, IBM Accessibility Guidelines • Mobile devices: Mobile Web Best Practices 1.0 • Target group of users: Elderly [Kurniawan & Zaphiris, 2005] • Specific environments: IMS guidelines for accessible learning applications
2. Web Accessibility Evaluation • 21 patterns were found for XHTML test cases • 6 require checking XHTML elements • 11 require elements and attributes • 4 of them are complex relationships • The Unified Guidelines Language (UGL) has been defined • An XML Schema that frames all test cases • Expressive • For evaluation purposes UGL is transformed into XQuery • Each test case has a corresponding XQuery template
2. Web Accessibility Evaluation
Test-case 17: “a specific value of an attribute requires another non-empty attribute”
Examples: a) an input with type="img" requires an alt attribute; b) an input with name="go" requires an alt attribute
UGL template (empty slots):
  <label> </label>
  <analysis_type>check attribute</analysis_type>
  <related_attribute>
    <atb> </atb>
    <analysis_type>value</analysis_type>
    <content test=" "> </content>
    <related_attribute>
      <atb> </atb>
      <analysis_type>compulsory</analysis_type>
    </related_attribute>
  </related_attribute>
UGL instance for b) (for a) the slots are filled with input, type, =, img, alt):
  <label>input</label>
  <analysis_type>check attribute</analysis_type>
  <related_attribute>
    <atb>name</atb>
    <analysis_type>value</analysis_type>
    <content test="=">go</content>
    <related_attribute>
      <atb>alt</atb>
      <analysis_type>compulsory</analysis_type>
    </related_attribute>
  </related_attribute>
XQuery template: //???[@??? test "???" and not(@???)] → FAIL
XQuery for a): //input[@type="img" and not(@alt)] → FAIL
XQuery for b): //input[@name="go" and not(@alt)] → FAIL
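To make the slot filling concrete, here is a minimal sketch in Python of how the test-case 17 template could be instantiated. The TEMPLATE string and the instantiate helper are illustrative assumptions, not the thesis implementation; the dictionary keys simply mirror the UGL element names shown above.

# Illustrative slot filling for test-case 17:
# "a specific value of an attribute requires another non-empty attribute"
TEMPLATE = '//{label}[@{atb}{test}"{value}" and not(@{required})]'

def instantiate(ugl_fields):
    # ugl_fields: the slot values extracted from a UGL test case
    return TEMPLATE.format(**ugl_fields)

print(instantiate({"label": "input", "atb": "type", "test": "=",
                   "value": "img", "required": "alt"}))
# -> //input[@type="img" and not(@alt)]
print(instantiate({"label": "input", "atb": "name", "test": "=",
                   "value": "go", "required": "alt"}))
# -> //input[@name="go" and not(@alt)]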
2. Web Accessibility Evaluation • An interactive web application for accessibility guidelines • A front-end for UGL guidelines • Creation, editing and sharing • Working jointly with an evaluation tool • Developers cannot be forced to use UGL
UGL excerpt:
<checkpoints id="2" title="HTML elements and their attributes">
  <priority>1</priority>
  <description/>
  <evaluation_type>auto-semi</evaluation_type>
  <techniques id="1">
    <type>HTML</type><description>Compulsory</description>
    <test_case id="7">
      <type>7</type>
      <evaluation_type>auto</evaluation_type>
      <evaluation_result>error</evaluation_result>
      <element>
        <label>IMG</label>
        <test_e>check attribute</test_e>
        <related_attribute>
          <atb>alt</atb>
          <test_a>compulsory</test_a>
        </related_attribute>
      </element>
    </test_case>
  </techniques>
  <techniques id="2">
    <test_case id="8">
      <type>8</type>
      <evaluation_type>auto</evaluation_type>
      <evaluation_result>error</evaluation_result>
      <element>
        <label>FRAME</label>
        <test_e>check attribute</test_e>
        <related_attribute>
          <atb>title</atb>
          <test_a>compulsory</test_a>
          <content analysis="not empty"/>
2. Web Accessibility Evaluation – Guidelines Management Framework 1. Select the textarea element “For each @id in textarea check there is a label where @for=@id”
2. Web Accessibility Evaluation – Guidelines Management Framework 2. Select the id attribute “For each @id in textarea check there is a label where @for=@id”
2. Web Accessibility Evaluation – Guidelines Management Framework 3. Select the label element “For each @id in textarea check there is a label where @for=@id”
2. Web Accessibility Evaluation – Guidelines Management Framework 4. Select the for attribute “For each @id in textarea check there is a label where @for=@id”
2. Web Accessibility Evaluation – Guidelines Management Framework 5. Define the element order “For each @id in textarea check there is a label where @for=@id”
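As an illustration of what the rule built in these five steps amounts to, the following is a sketch only, using Python with lxml rather than the framework's own XQuery engine; check_textarea_labels and the sample markup are hypothetical.

from lxml import html

def check_textarea_labels(page_source):
    """For each @id in textarea, check there is a label where @for=@id."""
    doc = html.fromstring(page_source)
    errors = []
    for ta in doc.xpath("//textarea[@id]"):
        tid = ta.get("id")
        # lxml allows XPath variables passed as keyword arguments
        if not doc.xpath("//label[@for=$tid]", tid=tid):
            errors.append(f"textarea id='{tid}' has no associated label")
    return errors

print(check_textarea_labels("<form><textarea id='msg'></textarea></form>"))
# -> ["textarea id='msg' has no associated label"]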
2. Web Accessibility Evaluation – Guidelines Management Framework • Search for existing guidelines
2. Web Accessibility Evaluation – Guidelines Management Framework • Retrieve guidelines
2. Web Accessibility Evaluation – Guidelines Management Framework • Evaluate web content
2. Web Accessibility Evaluation
1: user A creates, searches, shares and updates guidelines through the browser front-end
2: guidelines are stored in a remote repository on the server (Guidelines repository, Definition manager)
3: guidelines are transformed into UGL (Guidelines pre-processor)
4: UGL is decomposed into XQuery (XQuery1 ... XQueryn)
5: user B selects guidelines and evaluates a web page (e.g. http://www.foo.com); the Evaluation component retrieves the required XQueries from the server
Summary for Evaluation • A declarative language to frame accessibility guidelines is defined • An interactive application allows non-expert users to manage guidelines • An evaluation engine works jointly with the management framework resulting in a cooperative tool for accessibility guidelines
Outline • Motivation • Web Accessibility Evaluation • Web Accessibility Measurement 3.1 Web Accessibility Quantitative Metric 3.2 Deploying accessibility scores in Search Engines • Contextual Web Accessibility Assessment • Device-tailored Assessment • User-tailored Assessment • Conclusions
3. Web Accessibility Measurement • The most broadly accepted conformance scores are the WCAG 1.0 qualitative ones (0, A, AA, AAA) • Based on the assumption that if a test at a given level is violated, the page fails to meet that level • We need more than an accept/reject measure → quantitative metrics • Some scenarios require automatically obtained numeric scores • QA and measurement of updates within Web Engineering • Accessibility Observatories • Information Retrieval • Adaptive hypermedia techniques
3.1 Web Accessibility Quantitative Metric • A failure rate is calculated for all WCAG 1.0 checkpoints • Leads to normalized scores • The ratio of actual to potential errors piles up close to 0 • A hyperbola is applied to spread out these rates • An approximation of the hyperbola is used • The impact of WCAG 1.0 checkpoints is quantified • The failure rate for semi-automatic issues is estimated (figures on the slide: the hyperbola and its approximation)
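For intuition only: the exact hyperbola and its parameters are defined in the thesis and are not reproduced here, but a mapping of the following general shape illustrates how failure rates clustered near 0 can be spread over the whole score range (c is an assumed tuning parameter):

% Illustrative only: a rectangular hyperbola that expands small failure rates x in [0,1]
% into values in [0,100); the smaller c, the stronger the spread near 0.
f(x) = 100 \cdot \frac{x}{x + c}, \qquad 0 \le x \le 1,\; c > 0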
3.1 Web Accessibility Quantitative Metric
WAQM algorithm (pseudocode):
for i in each principle {P, O, U, R} loop
  for j in each type of checkpoint {auto, semi} loop
    for k in each priority {1, 2, 3} loop
      Ai,j = calculate_failure_rate() * priority_weight(k)
    end loop
  end loop
  Ai = (Ni,auto*Ai,auto + Ni,semi*Ai,semi) / Ni
end loop
A = (NP*AP + NO*AO + NU*AU + NR*AR) / N
• Evaluation is carried out against WCAG 1.0 • The failure rates of the checkpoints are grouped according to their • WCAG 2.0 principle membership • reported problem type • WCAG 1.0 priorities • All subgroups are merged, weighting them by the number of checkpoints in each subgroup • As a result, a single accessibility score A is obtained
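A minimal sketch of the aggregation described by the pseudocode, assuming the weighted failure rates have already been computed per checkpoint; the nested-dictionary input layout and the name waqm_aggregate are assumptions made for illustration, not the thesis code.

# rates[principle][ctype] = list of weighted failure rates, one per checkpoint,
# e.g. rates = {"P": {"auto": [...], "semi": [...]}, "O": {...}, "U": {...}, "R": {...}}
def waqm_aggregate(rates):
    principle_scores, principle_counts = {}, {}
    for principle, by_type in rates.items():
        auto, semi = by_type.get("auto", []), by_type.get("semi", [])
        n_auto, n_semi = len(auto), len(semi)
        a_auto = sum(auto) / n_auto if n_auto else 0.0
        a_semi = sum(semi) / n_semi if n_semi else 0.0
        n_i = n_auto + n_semi
        # Ai = (Ni,auto*Ai,auto + Ni,semi*Ai,semi) / Ni
        principle_scores[principle] = (n_auto * a_auto + n_semi * a_semi) / n_i if n_i else 0.0
        principle_counts[principle] = n_i
    n_total = sum(principle_counts.values())
    # A = (NP*AP + NO*AO + NU*AU + NR*AR) / N
    return sum(principle_counts[p] * principle_scores[p] for p in principle_scores) / n_total

print(waqm_aggregate({
    "P": {"auto": [10.0, 40.0], "semi": [20.0]},
    "O": {"auto": [5.0], "semi": []},
    "U": {"auto": [], "semi": [30.0]},
    "R": {"auto": [15.0], "semi": []},
}))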
3.1 Web Accessibility Quantitative Metric – Validation • Validation by experts • Experts assessed the accessibility of 14 home pages • The EvalAccess tool was used for evaluation and WAQM was applied • A strong positive correlation was found between the numeric expert assessment and WAQM, r(14)=0.56, p<0.05 • Testing reliability: reproducibility and consistency • 1363 pages from 15 sites were automatically evaluated with the EvalAccess and LIFT tools • Very strong rank correlation between sites, ρ(15)=0.74, and between all pages, ρ(1363)=0.72 • No correlation was found between absolute values; a method for parameter tuning is proposed
3.1 Web Accessibility Quantitative Metric – Validation • Results with parameter tuning are more similar and balanced for absolute values, while keeping a strong correlation for rankings based on scores, ρ(1449)=0.64, p<.000 (figure on the slide: EvalAccess vs LIFT scores before and after tuning)
Outline • Motivation • Web Accessibility Evaluation • Web Accessibility Measurement 3.1 Web Accessibility Quantitative Metric 3.2 Deploying accessibility scores in Search Engines • Contextual Web Accessibility Assessment • Device-tailored Assessment • User-tailored Assessment • Conclusions
3.2 Web Accessibility Measurement – Deploying accessibility scores in Search Engines • A study with blind users (Ivory et al., 2004) concluded that it would be useful • WAQM was incorporated into Information Retrieval systems
3.2 Web Accessibility Measurement – Deploying accessibility scores in Search Engines • A study was conducted to observe how commercial search engines behave with respect to accessibility • Google and Yahoo! Search were queried and their results re-ranked according to accessibility scores • Compared with Google and Yahoo! results without re-ranking • 12 queries from a corpus used for IR experiments were used
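A minimal sketch of the re-ranking step, assuming the engine results are already fetched and an accessibility scorer is available; rerank_by_accessibility and waqm_score are hypothetical names and the example scores are made up, not experimental data.

# results: list of (url, engine_rank) pairs as returned by the search engine
# waqm_score(url): hypothetical helper returning a 0-100 accessibility score for the page
def rerank_by_accessibility(results, waqm_score):
    scored = [(url, engine_rank, waqm_score(url)) for url, engine_rank in results]
    # higher accessibility score first; ties keep the engine's original order
    return sorted(scored, key=lambda r: (-r[2], r[1]))

fake_scores = {"http://a.example": 82.0, "http://b.example": 47.5}
print(rerank_by_accessibility(
    [("http://b.example", 1), ("http://a.example", 2)],
    lambda url: fake_scores[url]))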
3.2 Web Accessibility Measurement – Deploying accessibility scores in Search Engines • Results show that • The first 10 URLs provided by Yahoo! and Google obtain high accessibility scores • Reinforcing Pemberton’s (2003) statement on the visibility of accessible pages • Commercial search engines do not rank results according to accessibility, though • Yahoo! shows a tendency, although results are not definitive
Summary for Measurement • WAQM produces numeric scores to measure accessibility • WAQM is valid and reliable • It is concluded that the top 10 results produced by traditional search engines score high, although they are not ranked according to accessibility
Outline • Motivation • Web Accessibility Evaluation • Web Accessibility Measurement 3.1 Web Accessibility Quantitative Metric 3.2 Deploying accessibility scores in Search Engines • Contextual Web Accessibility Assessment • Device-tailored Assessment • User-tailored Assessment • Conclusions
4. Contextual Web Accessibility • Even if pages meet traditional accessibility guidelines, users still find problems • Selecting those guidelines that affect the user is not enough • Multiple group membership is not supported by tools • Group guidelines do not capture individual needs • Guidelines contain unresolved references to the user’s delivery context • Guidelines are dependent on user agents because UAAG is not met • 3 goals to capture the interaction context • Goal 1. Application of multiple guideline sets • Goal 2. Overcome limitations of User Agents • Goal 3. Capture the delivery context
4. Contextual Web Accessibility • Capturing the interaction context and completing guidelines with it leads to personal web accessibility • Scenarios that would benefit from personal accessibility • END-USERS • Personalized Information Retrieval Systems • Adaptive navigation support • DEVELOPERS • Developing Websites for Specific Audiences and Devices
4. Contextual Web Accessibility – Framework for personal accessibility • The framework for context-tailored assessment requires • A vocabulary to unambiguously identify context features • Gathered information is stored in a CC/PP profile • The CC/PP vocabulary is limited but extensible • The 5 guideline sets (WCAG, IBM, MWBP, Elderly and Learning) have been analyzed in order to find their dependencies with respect to context • A vocabulary is created with those features that refer to context in accessibility guidelines • Where possible, concepts have been borrowed from existing vocabularies
4. Contextual Web Accessibility – Framework for personal accessibility – creating a vocabulary for context profiles • Goal 2. With respect to ATs, two types of dependencies are identified • Negative dependencies: older versions may suffer accessibility problems even if guidelines are met → false negatives • Positive dependencies: new features of ATs make some accessibility barriers obsolete → false positives
4. Contextual Web Accessibility – Framework for personal accessibility – creating a vocabulary for context profiles • Goal 3. Those references that guidelines make to the delivery context are captured
4. Contextual Web Accessibility – Framework for personal accessibility • Goal 1: Multiple guideline sets → a repository of UGL guidelines (UGL repository) • Goal 2: Overcome user agent limitations → a detector of installed ATs (Assistive Technologies Detector)
4. Contextual Web Accessibility – Framework for personal accessibility • Goal 3: Capture the Delivery Context • Device information retrieval from heterogeneous repositories • UAProf profiles: extended CC/PP profiles describing device features • WURFL profile: XML file containing device descriptions • Device Atlas: device description files (JSON) • These sources are accessed by the Device Information Retriever (UAProf via the Jena UAProf API, WURFL, Device Atlas)
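As a hedged illustration of reading device descriptions from one of these repositories, the sketch below assumes the classic WURFL XML layout (<device> elements holding <group>/<capability> pairs), ignores WURFL's fall_back hierarchy, and uses simple user-agent prefix matching; device_capabilities is a hypothetical helper, not the Device Information Retriever's code.

import xml.etree.ElementTree as ET

def device_capabilities(wurfl_path, user_agent):
    """Return the capability dict of the first device whose user_agent
    attribute is a prefix of the requesting UA string (simplified matching)."""
    tree = ET.parse(wurfl_path)
    for device in tree.iter("device"):
        ua = device.get("user_agent") or ""
        if ua and user_agent.startswith(ua):
            return {cap.get("name"): cap.get("value")
                    for group in device.findall("group")
                    for cap in group.findall("capability")}
    return {}

# e.g. device_capabilities("wurfl.xml", request_user_agent).get("resolution_width")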
4. Contextual Web Accessibility – Framework for personal accessibility • The extracted information is put in a CC/PP profile using the defined vocabulary (diagram on the slide: Device Atlas, WURFL and UAProf feed the Device Information Retriever, which together with the Assistive Technologies Detector fills the hardware features, software features and assistive technologies components of the CC/PP profile)
4. Contextual Web Accessibility – Framework for personal accessibility – creating a vocabulary for context profiles
Automatically obtained CC/PP profile for personal accessibility (access namespace):
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:ccpp="http://www.w3.org/2000/07/04-ccpp#"
         xmlns:access="http://sipt07.si.ehu.es/profiles/2008/access-schema#">
  <rdf:Description rdf:about="http://sipt07.si.ehu.es/profiles/2008/user_0017">
    <ccpp:component rdf:resource="http://www.w3.org/2000/07/04-ccpp#HardwarePlatform"/>
    <ccpp:component rdf:resource="http://www.w3.org/2000/07/04-ccpp#SoftwarePlatform"/>
    <ccpp:component rdf:resource="http://sipt07.si.ehu.es/profiles/2008/access-schema#AT"/>
  </rdf:Description>
  <!-- hardware features -->
  <rdf:Description rdf:about="http://www.w3.org/2000/07/04-ccpp#HardwarePlatform">
    <access:CpuName>AMD Athlon(tm) XP 2400+</access:CpuName>
    <access:CpuConstructor>AuthenticAMD</access:CpuConstructor>
    <access:ramSize>1035172 kB</access:ramSize>
    <access:display>1024 x 768 pixels</access:display>
    <access:keyboard>AT Translated Set 2 keyboard</access:keyboard>
    <access:ColourCapable>Yes</access:ColourCapable>
    <access:ImageCapable>Yes</access:ImageCapable>
    <access:SoundOutputCapable>Yes</access:SoundOutputCapable>
  </rdf:Description>
  <!-- software features -->
  <rdf:Description rdf:about="http://www.w3.org/2000/07/04-ccpp#SoftwarePlatform">
    <access:OSName>Linux</access:OSName>
    <access:OSVendor>Unknown</access:OSVendor>
    <access:OSVersion>2.6.9-1.667</access:OSVersion>
    <access:user>root</access:user>
    <access:JavaVersion>1.4.2_10</access:JavaVersion>
    <access:JavaVendor>Sun Microsystems Inc.</access:JavaVendor>
    <access:JavaVendorURL>http://java.sun.com/</access:JavaVendorURL>
  </rdf:Description>
  <!-- assistive technologies -->
  <rdf:Description rdf:about="http://sipt07.si.ehu.es/profiles/2008/access-schema#AT">
    <access:ATName>Brltty</access:ATName>
    <access:ATVendor>The Brltty Team</access:ATVendor>
    <access:ATVersion>3.6.1</access:ATVersion>
    <access:ATType>Output</access:ATType>
    <access:ATIOtype>Braille</access:ATIOtype>
  </rdf:Description>
  <rdf:Description rdf:about="http://sipt07.si.ehu.es/profiles/2008/access-schema#AT">
    <access:ATName>K magnifier</access:ATName>
    <access:ATVendor>Kde Access Team</access:ATVendor>
    <access:ATVersion>1.0.0</access:ATVersion>
    <access:ATType>Output</access:ATType>
    <access:ATDescription>KDE Accessibility Magnifier</access:ATDescription>
    <access:ATIOtype>Magnifier</access:ATIOtype>
  </rdf:Description>
</rdf:RDF>
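Since the profile is plain RDF/XML, it can be queried with a standard RDF library; the following minimal sketch uses rdflib to list the detected assistive technologies (assistive_technology_names is a hypothetical helper, not part of the framework).

from rdflib import Graph, Namespace

ACCESS = Namespace("http://sipt07.si.ehu.es/profiles/2008/access-schema#")

def assistive_technology_names(profile_xml):
    """Return all access:ATName values found in a CC/PP profile string."""
    g = Graph()
    g.parse(data=profile_xml, format="xml")   # the CC/PP profile is RDF/XML
    return [str(o) for _, _, o in g.triples((None, ACCESS.ATName, None))]

# e.g. assistive_technology_names(open("user_0017.rdf").read())
# -> ['Brltty', 'K magnifier']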
4. Contextual Web Accessibility – Framework for personal accessibility • The Guidelines Manager, based on the data of the CC/PP profile, selects and completes guidelines from the UGL repository • Goal 1. Only those guidelines that affect the user are downloaded • Goal 2. Guidelines with positive dependencies are not evaluated • Goal 2. Guidelines with negative dependencies will produce a failure • Goal 3. Guidelines are completed with delivery context data
4. Contextual Web Accessibility – Guidelines & Profiles • Example: IMAGE_MAPS best practice • UGL is extended with semantic information
CC/PP excerpt:
<access:pntSupport>true</access:pntSupport>
UGL excerpt:
<test_case id="8">
  <evaluation_type>auto</evaluation_type>
  <evaluation_result>error</evaluation_result>
  <profile_feature type="access:pntSupport"/>
  <value> </value>
  <element>
    <label>OBJECT</label>
    <test_elem>check attribute</test_elem>
    <related_attribute>
      <atb>ismap</atb>
    </related_attribute>
  </element>
</test_case>
1. Matching: the profile_feature access:pntSupport is looked up in the CC/PP profile
2. Fill in slots: its value (true) fills the empty <value> slot and the slot in the XQuery template
XQuery excerpt (after slot filling):
let $tmp := doc("web_doc.xml")//OBJECT[@ismap]
return if (not(true())) then
  for $i in $tmp return <error>{$i/@line, $i/name()}</error>
else ()
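A minimal sketch of the matching and slot-filling steps, assuming the CC/PP profile has been flattened into a dictionary and that the XQuery template marks its slot with a named placeholder; XQUERY_TEMPLATE and fill_slots are illustrative assumptions, and the sketch only handles boolean-valued features such as access:pntSupport.

# profile: flattened CC/PP values, e.g. {"access:pntSupport": "true"}
XQUERY_TEMPLATE = (
    'let $tmp := doc("web_doc.xml")//OBJECT[@ismap] '
    'return if (not({slot}())) then '
    'for $i in $tmp return <error>{{$i/@line, $i/name()}}</error> else ()'
)

def fill_slots(template, profile, feature):
    """1. Matching: look the feature up in the profile; 2. Fill in the slot."""
    value = profile.get(feature)
    if value is None:
        return None          # feature not present: the test cannot be tailored
    return template.format(slot=value)

print(fill_slots(XQUERY_TEMPLATE, {"access:pntSupport": "true"}, "access:pntSupport"))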
4. Contextual Web Accessibility – Framework for personal accessibility • The Guidelines Manager produces a set of context-tailored evaluation tests (XQuery1 ... XQueryn) • They are run against the (X)HTML content • As a result, the evaluation report is tailored to the context
4. Contextual Web Accessibility – Applying metrics • WAQM is strongly tied to the WCAG guidelines • A more flexible aggregation method that can be adapted to different interaction contexts is therefore applied • Traditional aggregation: a weighted sum of the evaluation results E with weights W • Logic Scoring of Preferences (LSP): the exponent ρ(d) is selected according to the required logical relationship between evaluation results • Successfully applied by Olsina & Rossi (2002) in web application Quality Assurance scenarios (the formulas are sketched below)
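The two aggregation schemes, written out in standard notation (symbols follow the slide; the concrete weights and ρ(d) values used in the thesis are not reproduced here):

% Traditional (linear) aggregation: a weighted sum of evaluation results
A = \sum_{i=1}^{n} W_i \, E_i, \qquad \sum_{i=1}^{n} W_i = 1

% Logic Scoring of Preferences: a weighted power mean whose exponent rho(d)
% encodes the logical relationship (andness/orness) required between results
A = \left( \sum_{i=1}^{n} W_i \, E_i^{\,\rho(d)} \right)^{1/\rho(d)}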
Summary for Contextual Evaluation • An assessment framework that considers the interaction context • How assistive technologies provide access to content, and which features the device offers, are of utmost importance • A metric that adapts to different contextual settings is defined
Outline • Motivation • Web Accessibility Evaluation • Web Accessibility Measurement 3.1 Web Accessibility Quantitative Metric 3.2 Deploying accessibility scores in Search Engines • Contextual Web Accessibility Assessment • Device-tailored Assessment • User-tailored Assessment • Conclusions
5. Device-Tailored Web Accessibility - developers • Mobile Web Best Practices with different devices • Tool effectiveness • 10 pages were evaluated for different devices (D1 < D2 < D3) • Device-tailored vs traditional evaluation • Device-tailored evaluation statistically differs • Following Brajnik’s (2004) method for tool effectiveness • False positives among warnings are removed → increase in tool completeness • More false negatives among failures are found → increase in tool correctness • Mobile Web Guidelines are written at a low specificity level
5. Device-Tailored Web Accessibility - developers • Device/paradigm behaviour • Logic Scoring of Preferences was applied • 5 metrics: Navigation, Layout, Page Definition, Input and Overall • 102 web pages, mobile vs desktop (D1 < D2) • Higher scores are obtained for pages to be deployed on mobile devices • Better-featured devices score higher
5. Device-Tailored Web Accessibility - end users • Context: able-bodied users accessing the web with mobile devices • Access device: a PDA • Guidelines: mobileOK tests for mobile web conformance • 20 participants • Task: search by navigating • Usability measures • Effectiveness: completed task rate • Efficiency: task completion time • Satisfaction: Lewis’ After-Scenario Questionnaire
5. Device-Tailored Web Accessibility - end users • Assessment metrics: device-tailored vs non-tailored • Correlation matrix (shown as a figure on the slide); significance levels: * p<.05, ** p<.03, *** p<.00