

  1. July 2012 Update, July 12, 2012. Andrew J. Buckler, MS, Principal Investigator, QI-Bench. With funding support provided by the National Institute of Standards and Technology.

  2. Agenda for Today • Approach, plans, and progress on Testing • Analysis Modules • Overview • Bias-Linearity Demo • Second development iteration

  3. Testing: System Under Test • Functionality perspective: • Specify, Formulate, Execute, Analyze, Package • Range of supported information: • _loc, _dcm, _seg, _chg, and _cov data types (a test-coverage sketch follows below)
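The stages and data types on this slide imply a test-coverage matrix: every functional stage should be exercised against every supported data type. A minimal sketch of enumerating that matrix follows; the `Stage` enum, `DATA_TYPES` list, and `coverage_matrix()` helper are illustrative names, not QI-Bench's actual test harness.

```python
from enum import Enum


class Stage(Enum):
    """Functional stages named on the slide."""
    SPECIFY = "Specify"
    FORMULATE = "Formulate"
    EXECUTE = "Execute"
    ANALYZE = "Analyze"
    PACKAGE = "Package"


# Data types the system under test must support (from the slide).
DATA_TYPES = ["_loc", "_dcm", "_seg", "_chg", "_cov"]


def coverage_matrix():
    """Enumerate every (stage, data type) pair a test plan should exercise."""
    return [(stage.value, dtype) for stage in Stage for dtype in DATA_TYPES]


if __name__ == "__main__":
    for stage, dtype in coverage_matrix():
        print(f"test: {stage} with {dtype}")
```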

  11. Testing: Risk-Based, Multiple Scopes • Risk analysis (RA) specifies what level of unit/module, integration, verification, and validation testing is needed, based on the application • Validation itself comprises: • Installation Qualification (IQ) • Operational Qualification (OQ) • Performance Qualification (PQ): • capacity • speed • correctness (including curation and computation) • usability • utility (a sketch of tagging tests by qualification level follows below)
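One way the IQ/OQ/PQ split can be carried into an automated suite is by tagging tests with their qualification level. The sketch below uses pytest markers for this; the `iq`/`oq`/`pq` markers are hypothetical (they would be registered in pytest.ini), and the actual QI-Bench protocols live in the test plans, not here.

```python
# Hedged sketch of risk-based test tagging with pytest markers.
import importlib.util

import pytest


@pytest.mark.iq
def test_installation_dependencies_present():
    """IQ: the deployed environment has the components the install requires."""
    assert importlib.util.find_spec("numpy") is not None


@pytest.mark.oq
def test_pipeline_completes_on_reference_case():
    """OQ: an end-to-end Specify -> Package run succeeds on a known input."""
    # Placeholder: would invoke the pipeline on curated reference data.
    assert True


@pytest.mark.pq
@pytest.mark.parametrize(
    "axis", ["capacity", "speed", "correctness", "usability", "utility"]
)
def test_performance_axis(axis):
    """PQ: one test per performance axis named on the slide."""
    # Placeholder: each axis would get its own quantitative acceptance check.
    assert axis
```

Risk analysis then decides which markers a given application must pass, e.g. `pytest -m "iq or oq"` for a low-risk deployment versus the full suite for a validated one.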

  12. Test plans, protocols, and reports

  13. Analysis Modules (a bias-linearity sketch follows below)
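The agenda's Bias-Linearity Demo belongs to this module family. Below is a minimal sketch of what such an analysis computes, using made-up numbers; it is plain NumPy arithmetic under the standard definitions (bias as mean signed error, linearity via an ordinary least-squares fit), not the QI-Bench Analyze toolbox itself.

```python
import numpy as np

# Illustrative data only: ground-truth values vs. an algorithm's read-outs,
# e.g., phantom lesion volumes in mL.
true_vals = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
measured = np.array([1.2, 2.1, 4.3, 8.9, 17.1])

# Bias: mean signed difference between measurement and truth.
bias = np.mean(measured - true_vals)

# Linearity: fit measured = slope * true + intercept; a slope near 1 and an
# intercept near 0 indicate a linear, proportional measurement.
slope, intercept = np.polyfit(true_vals, measured, deg=1)
r = np.corrcoef(true_vals, measured)[0, 1]

print(f"bias = {bias:.3f}, slope = {slope:.3f}, "
      f"intercept = {intercept:.3f}, R^2 = {r**2:.3f}")
```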

  17. Second development iteration: content and priorities
  • Theoretical Base: domain-specific language; executable specifications; computational model; enterprise vocabulary / data service registry
  • Functionality: end-to-end Specify -> Package workflows; curation pipeline workflows; DICOM (segmentation objects, query/retrieve, structured reporting; see the sketch below); worklist for scripted reader studies; improved query/search tools (including linking Formulate and Execute); continued expansion of the Analyze toolbox
  • Test Beds: further analysis of the 1187/4140, 1C, and other data sets using LSTK and/or the API to other algorithms; support for more 3A-like challenges; integration of detection into the pipeline; meta-analysis of reported results using Analyze; false-positive reduction in lung cancer screening; other biomarkers
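One roadmap item above is DICOM segmentation-object support in the curation pipeline. A minimal sketch of inspecting such an object with pydicom follows; the file path and `load_segmentation()` helper are hypothetical, and this is not the QI-Bench curation code.

```python
import pydicom

# Standard DICOM Segmentation Storage SOP Class UID.
SEG_SOP_CLASS_UID = "1.2.840.10008.5.1.4.1.1.66.4"


def load_segmentation(path):
    """Read a DICOM file and confirm it is a segmentation object.

    A curation pipeline would iterate over a whole study rather than
    a single hand-picked file.
    """
    ds = pydicom.dcmread(path)
    if ds.SOPClassUID != SEG_SOP_CLASS_UID:
        raise ValueError(f"not a DICOM SEG object: {ds.SOPClassUID}")
    # pixel_array holds the binary segment frames (frames x rows x cols).
    return ds.pixel_array


if __name__ == "__main__":
    mask = load_segmentation("example_seg.dcm")  # hypothetical input file
    print("segmentation frames:", mask.shape)
```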

  19. Value proposition of QI-Bench • Efficiently collect and exploit evidence establishing standards for optimized quantitative imaging: • Users want confidence in the read-outs • Pharma wants to use them as endpoints • Device/SW companies want to market products that produce them without huge costs • The public wants to trust the decisions they contribute to • By providing a verification framework to develop precompetitive specifications and support test harnesses to curate and utilize reference data • Doing so as an accessible and open resource facilitates collaboration among diverse stakeholders

  20. Summary: QI-Bench Contributions • We make it practical to increase the amount of data used, for greater statistical power. • We provide practical means to grapple with massive data sets. • We address the problem of efficient use of resources to assess limits of generalizability. • We make formal specification accessible to diverse groups of experts who are not skilled in, or focused on, knowledge engineering. • We map both medical and technical domain expertise into representations well suited to emerging capabilities of the semantic web. • We enable a mechanism to assess compliance with standards or requirements within specific contexts for use. • We take a “toolbox” approach to statistical analysis. • We provide these capabilities in a manner accessible to varying levels of collaborative models, from individual companies or institutions, to larger consortia or public-private partnerships, to fully open public access.

  21. QI-Bench Structure / Acknowledgements • Prime: BBMSC (Andrew Buckler, Gary Wernsing, Mike Sperling, Matt Ouellette, Kjell Johnson, Jovanna Danagoulian) • Co-Investigators: • Kitware (Rick Avila, Patrick Reynolds, Julien Jomier, Mike Grauer) • Stanford (David Paik) • Financial support as well as technical content: NIST (Mary Brady, Alden Dima, John Lu) • Collaborators / Colleagues / Idea Contributors: • Georgetown (Baris Suzek) • FDA (Nick Petrick, Marios Gavrielides) • UMD (Eliot Siegel, Joe Chen, Ganesh Saiprasad, Yelena Yesha) • Northwestern (Pat Mongkolwat) • UCLA (Grace Kim) • VUmc (Otto Hoekstra) • Industry: • Pharma: Novartis (Stefan Baumann), Merck (Richard Baumgartner) • Device/Software: Definiens, Median, Intio, GE, Siemens, Mevis, Claron Technologies, … • Coordinating Programs: • RSNA QIBA (e.g., Dan Sullivan, Binsheng Zhao) • Under consideration: CTMM TraIT (Andre Dekker, Jeroen Belien)
