This presentation discusses the complexities involved in conducting user studies for visualization algorithms. Klaus Mueller (Stony Brook University) and Joachim Giesen (MPI) highlight the difficulty of testing many parameters, each with many value settings, which leads to overwhelming permutation complexity. Recalling Dagstuhl 2007, the talk moves from Edi Gröller’s provocation to eliminate the user to Tom Ertl’s insistence that user studies are needed. It then suggests borrowing techniques such as conjoint analysis from market research to make user study designs tractable, using pair-wise comparisons from which statistical significance can be determined.
Modeling The User • Klaus Mueller (and Joachim Giesen, MPI) • Computer Science, Center for Visual Computing, Stony Brook University
Dagstuhl 2007 Moments • Traumatizing beginnings: • Edi Gröller: “Kill (Eliminate) the user!” • Regaining hope: • Tom Ertl: “User studies are needed.” • Chuck Hansen: dedicates 1 out of his 4 talks to user studies • Inspiring thoughts (in unrelated context): • Penny Rheingans: “Improve visualization accuracy by creating a model of the data.”
Overall Tone: • User studies are sorely needed, but hard to do!!
Now, Why Are User Studies So Hard? • Visualization algorithms typically have many parameters, each with many value settings • the sheer permutation complexity can be overwhelming • Testing them all on one user may actually lead to his death • and we need to perform these tests with many users • So… Mission Accomplished?
Well… • Let’s have a closer look…
For Example: Volume Rendering • Some rather trivial parameters: • rendering algorithm (X-ray, MIP, US-DVR, S-DVR, GW-DVR) • ray step size (continuous scale) • resolution • background • colormap (color transfer function) • viewpoint • Some more complex ones: • rendering style (various illustrative rendering schemes) • (magic) lenses • (magic) shadows • advanced BRDFs and ray modeling • etc…
Sample Testing Scenario (1) • Which colormap shows more detail?
Sample Testing Scenario (2) • Which colormap shows more detail?
Parameter Test Complexity • Notice: • all renderings show all features • all renderings use the same window size • Variables: • 3 colormaps • 5 rendering modes • 6 viewpoints • 2 image resolutions • 3 ray step sizes • 5 backgrounds • 3 × 5 × 6 × 2 × 3 × 5 = 2700 permutations → ~7M pair-wise comparisons
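The counts on this slide can be verified in a few lines. A minimal sketch (parameter names follow the slide; the ~7M figure assumes ordered pair-wise comparisons, i.e. n·(n−1)):

```python
# Sketch: counting configurations and pair-wise comparisons for the
# parameter set listed on the slide (one value chosen per parameter).
from math import prod

levels = {
    "colormap": 3,
    "rendering mode": 5,
    "viewpoint": 6,
    "image resolution": 2,
    "ray step size": 3,
    "background": 5,
}

configs = prod(levels.values())      # 3 * 5 * 6 * 2 * 3 * 5 = 2700
pairs = configs * (configs - 1)      # ordered pair-wise comparisons
print(configs, pairs)                # 2700 7287300  (~7M)
```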
Daunting, But Not Unusual… • Market research deals with these problems on a regular basis • have attributes (parameters) and levels (values) • For example, consider the design of a new car model, optimizing the following parameters: • comfort and convenience • quality • styling • performance • Sound familiar?
How Can This Help Us? • A common technique used in market research is conjoint analysis • Conjoint analysis allows one to: • interview a modest number of people • with a modest number of pair-wise comparison tests • The tests simulate real buying situations and statistical significance can be determined • We have actually done this: • 786 respondents • 20 pair-wise tests each • And the results make sense
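To illustrate how conjoint-style choice tasks keep the test count modest, here is a hypothetical sketch (the attribute names and values are placeholders, not the study’s actual design): each task pairs two renderings that differ in exactly one attribute, and each respondent answers only a handful of such tasks rather than millions.

```python
# Sketch (hypothetical design, not the authors' exact procedure):
# draw a modest number of pair-wise choice tasks per respondent
# instead of exhausting all ~7M ordered pairs.
import random

params = {
    "colormap": ["rainbow", "heat", "gray"],
    "mode": ["X-ray", "MIP", "US-DVR", "S-DVR", "GW-DVR"],
    "background": ["black", "white", "blue", "checker", "gradient"],
}

def random_config(rng):
    return {p: rng.choice(vals) for p, vals in params.items()}

def choice_tasks(n_tasks, rng):
    """Yield n_tasks pairs of configurations differing in a single
    attribute, so each task isolates that attribute in the comparison."""
    for _ in range(n_tasks):
        a = random_config(rng)
        attr = rng.choice(list(params))
        b = dict(a)
        b[attr] = rng.choice([v for v in params[attr] if v != a[attr]])
        yield attr, a, b

rng = random.Random(0)
tasks = list(choice_tasks(20, rng))   # 20 pair-wise tests per respondent
```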
Results • Top 10 (detail / aesthetics): • Flop 10 (detail / aesthetics):
Method • We apply Thurstone’s Method of Comparative Judgment to each attribute separately • isolate attributes in the choice tasks • determine relative rankings of attribute levels from the frequency with which one level is chosen over another • assume normally distributed rankings (and their differences) • The conjoint structure requires a modification of Thurstone’s method • the rankings of the various attributes must be transformed onto a comparable scale • the transformation factor reflects the relative influence of an attribute on the overall visualization experience
What Does This Enable? • Efficient testing of multi-parameter scenarios in visualization • Personalization of visualization experiences for specific users (or user groups) • Learning of user preferences given specific task and rendering scenario descriptions • Constructing a model of the user to optimize his/her visualization experience and efficiency
Acknowledgments • Lujin Wang (Stony Brook University) • for rendering 5000+ images • Eva Schuberth (ETH Zürich) • for contributions to the statistics • Peter Zolliker (EMPA Dübendorf) • for contributions on perceptual issues and statistics
Questions? • Which image do you like best? • Which image shows more detail?