
Getting a Grip on Assistive Technology: Cognitive Testing Disability Questions

Barbara F. Wilson, Barbara Altman, and Karen Whitaker, National Center for Health Statistics; Vicki A. Freedman, Jennifer C. Cornman, and Lisa Landsberg, Polisher Research Institute; Emily M. Agree, Johns Hopkins Bloomberg School of Public Health


Presentation Transcript


  1. Getting a Grip on Assistive Technology: Cognitive Testing Disability Questions. Barbara F. Wilson, Barbara Altman, and Karen Whitaker, National Center for Health Statistics; Vicki A. Freedman, Jennifer C. Cornman, and Lisa Landsberg, Polisher Research Institute; Emily M. Agree, Johns Hopkins Bloomberg School of Public Health

  2. Project Goals To develop a set of instruments for national surveys on health and aging to collect information on assistive technologies (AT) and the environments in which they are used.

  3. Methodology of the Cognitive Testing • Recruited 28 participants with a range of conditions (e.g., ALS, blindness, diabetes) • Participants used a wide variety of assistive technology items (e.g., cane, grab bars, hearing aid, walker, wheelchair)

  4. Methodology • Cognitive interviews were conducted over the telephone from one room to another in the NCHS Questionnaire Design Research Laboratory (QDRL). • Interviews were videotaped with permission. • Interviewing techniques used were think aloud with probes and debriefing.

  5. Methodology • Three rounds of testing. • Revised instrument after each round.

  6. Selected Findings: • Language and comprehension • Response option scales • Reference periods

  7. Findings about comprehension • Some words were misunderstood, either because they could not be heard over the telephone or because they were unfamiliar (curb cut, corridor, health, pill reminder, stall shower, vision)

  8. Findings about interpretation • Participants could not decide what constituted “convenient” public transportation if they could not walk, or “adequate” lighting if they could not see well. • They also could not tell whether “How often do you walk around your neighborhood?” was intended to find out if the participant socialized or exercised. • One man who uses a wheelchair said, “I roll around my neighborhood.”

  9. Findings about response options • Many participants did not understand that they were being offered five possible response options and should choose one.

  10. Transcript of one question • “You said there is a computer in your home. And how hard or easy is it for you to do the following…is it Very hard, Hard, Not hard or easy, Easy, or Very easy to use the keyboard?” • Shrugs shoulder…“It’s OK.” • “And so you say that it’s ‘Not hard or easy’?” • “It’s Not hard.” • “Is it Easy?” • “Yes.” • “Is it Very easy?” • “It’s Easy. That’s what it is. Easy.”

  11. Findings about response options • Many participants did not understand that “Not hard or easy” was a single option, the third of five. • To solve this, the response options could be read aloud with a number preceding each one (see the sketch below). • This was a problem over the telephone; in another mode (self-administered by computer or paper and pencil), it might not have been a problem.
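To illustrate that fix, here is a minimal hypothetical sketch (not part of the tested instrument; written in Python purely for illustration) of a telephone script that reads the five options aloud with a number before each one:

```python
# Hypothetical sketch: prepend numbers to a 5-point scale so telephone
# respondents hear a fixed, countable list of options.

OPTIONS = ["Very hard", "Hard", "Not hard or easy", "Easy", "Very easy"]

def interviewer_script(stem: str, item: str) -> str:
    """Build the read-aloud text for one follow-up item."""
    numbered = ", ".join(f"{n} {label}" for n, label in enumerate(OPTIONS, start=1))
    return f"{stem} {item}? Please answer with a number: {numbered}."

print(interviewer_script("How hard or easy is it for you to use", "the keyboard"))
```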

  12. Response options • Participants did not naturally use the requested metric.

  13. Transcript of second question • “And how hard or easy is it to see the screen?” • “I see it very well.” • “Would you say then it’s Easy to see the screen?” • “Yes.” • “Is it very easy?” • “Very easy.”

  14. Response options • Questionnaire designers intended that the question stem, with its five response options, be carried forward to four follow-up questions. However, questions 2, 3, and 4 abbreviated the scale to just “hard or easy,” and the interviewer had to repeat the options to arrive at a valid answer.

  15. Transcript of third question • “And how hard or easy is it to control the Pointer?” • “Easy.” • “Very easy?” • “Very easy.”

  16. Transcript of fourth question • “And how hard or easy is it to follow the instructions?” • “You mean Email?” • “Yes.” • “I learned it, and it’s no problem now.” • “So it’s Very easy?” • “Yes.”

  17. Problem with bipolar scales • One woman said that a numbered scale would be easier for her than a 5-point bipolar verbal scale from Very hard to Very easy. • In asking for a numbered scale from one to five, she was in effect asking for a unipolar, one-directional scale.

  18. Time frames • Cognitive testing showed that it was important for questions to include a reference period (In the past 30 days) for use of assistive technology items. • Without the time frame, people offered long lists of equipment that was no longer used, confounding subsequent questions.

  19. Frequency of AT use • In the first round, frequency of use was asked generically for all AT items (beds, canes, hearing aids). The original question was: “How often do you currently use it…All the time, some of the time, or never?”

  20. Frequency of AT use, contd. • When asked about a cane, this question got responses such as “90 percent of the time, except when I sleep, eat or read the paper” or “most of the time”. • Good answers, but not in the requested metric, which was All the time, some of the time, or never.

  21. Frequency of AT use, contd. • When asked about a hospital bed, “How often do you currently use it…All the time, some of the time, never?” • Response: “Am I in bed 24 hours a day?”

  22. Frequency of AT use, contd. • Cognitive testing showed that, in order to make sense, the frequency of using an AT item (e.g. hospital bed) had to be linked to a specific activity (e.g. sleeping).
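As a hypothetical sketch of this linkage (the item-activity pairings other than hospital bed and sleeping are assumptions, not taken from the study), a question generator could pair each AT item with a specific activity and keep the 30-day reference period:

```python
# Hypothetical sketch: tie each assistive technology item to a specific
# activity so the frequency question reads sensibly, and bound it with a
# 30-day reference period. Pairings other than bed/sleeping are assumed.

ITEM_ACTIVITY = {
    "hospital bed": "sleeping",
    "cane": "walking around your home",
    "hearing aid": "following a conversation",
}

def frequency_question(item: str) -> str:
    activity = ITEM_ACTIVITY[item]
    return (f"In the past 30 days, how often did you use your {item} "
            f"for {activity}: all of the time, some of the time, or never?")

for item in ITEM_ACTIVITY:
    print(frequency_question(item))
```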

  23. Questions to develop Measures of Effectiveness • Effectiveness Concepts: • Participation • Safety • Control • Independence • Pain, fatigue, extra time needed to do things

  24. Original Effectiveness Questions • Response options offered a balanced, forced-choice, 5-point bipolar semantic differential scale: • “Please tell me whether you strongly disagree, disagree, neither agree nor disagree, agree, or strongly agree with each of the following statements.”

  25. Example: One of 15 Effectiveness questions • “Because I use these items, I have less pain than I used to. Do you Strongly disagree, Disagree, Neither disagree nor agree, Agree, or Strongly agree?”

  26. Conclusions • Researchers and QDRL staff jointly reviewed tapes from the first two rounds. • The project demonstrated that telephone surveys of elderly persons present special problems. • The overly long survey was shortened by dropping unworkable questions. • Synonyms were found for misunderstood terms.

  27. Conclusions, contd. • The purpose of confusing questions was clarified. • Bipolar response option scales were replaced with unipolar scales that included a “Does not apply” option. • Assistive technology questions were limited to a 30-day reference period. • The third round went smoothly. • The revised instrument will be pilot tested.

  28. Balanced, forced choice, 5-point bipolar scales • Meaning of the midpoint was unclear. The midpoint might mean that they neither disagreed nor agreed, that they both disagreed and agreed, or that sometimes the statement was true and sometimes false. • Moreover, for some items “Neither agree nor disagree” just did not fit.

  29. Example • “I have less pain than I used to.” • One participant said she never had pain. If she agreed that she had less, then it would seem as if she previously had pain. • If she disagreed that she had less pain, it would sound as if the Assistive Tech did not help.

  30. Problems with forced-choice, Disagree/Agree response scale • There was a hidden assumption that there had been pain. • She wanted the option to say “Does not apply.”

  31. Revised question (unipolar, 3-point, non-forced choice) • “Because you use these items, how much less painful is it for you to do your daily activities… • No less, a little less, or a lot less? • Or does it not apply?”
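A hypothetical sketch (the question wording comes from the slides, but the code itself is illustrative only) that puts the original and revised versions of the pain item side by side:

```python
# Hypothetical sketch: the original bipolar, forced-choice pain item versus
# the revised unipolar item with an explicit "Does not apply" option.

ORIGINAL = {
    "stem": "Because I use these items, I have less pain than I used to.",
    "scale": ["Strongly disagree", "Disagree", "Neither disagree nor agree",
              "Agree", "Strongly agree"],           # bipolar, forced choice
}

REVISED = {
    "stem": ("Because you use these items, how much less painful is it "
             "for you to do your daily activities?"),
    "scale": ["No less", "A little less", "A lot less",
              "Does not apply"],                    # unipolar, non-forced
}

def read_aloud(question: dict) -> str:
    """Render the interviewer script: stem followed by the options."""
    return f'{question["stem"]} {", ".join(question["scale"])}?'

print(read_aloud(ORIGINAL))
print(read_aloud(REVISED))
```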

  32. Conclusion • Researchers and QDRL staff reviewed tapes from the first two rounds. • Researchers revised, deleted, or fine-tuned the questions. • The third round went more smoothly. • The revised instrument will be pilot tested.
