
September 2012 Update September 13, 2012 Andrew J. Buckler, MS Principal Investigator, QI-Bench






Presentation Transcript


  1. September 2012 Update September 13, 2012 Andrew J. Buckler, MS Principal Investigator, QI-Bench With Funding Support provided by National Institute of Standards and Technology

  2. Agenda for Today • Update on statistical analysis library modules, including conceptual development of aggregate uncertainty (Jovanna) • Overview of functionality in the Reference Data Set Manager staged for the development iteration (Patrick)

  3. Unifying Goal of 2nd Development Iteration • Perform end-to-end characterization of vCT including meta-analysis of literature, incorporation of QIBA results, and "scaling up" using an automated detection and reference volumetry method. • Integrated characterization across QIBA, FDA, LIDC/RIDER, Give-a-scan, and Open Science data sets (e.g., biopsy cases), through analysis modules, rolling up to an i_ file in a zip archive. • Specifically, have people like Jovanna, Ganesh, and Adele use it (as opposed to only Gary, Mike/Patrick, and Kjell)

  4. Analyze: Update on Library Modules

  5. Analyze: Validation • Go to Jovanna’s desktop

  6. Analyze: Aggregate Uncertainty • Objective: comprehensively characterize the performance of an imaging biomarker. • Two orthogonal considerations: • Breadth of data used: use as much data as you can, regardless of where it comes from • Nature of the study designs that determine the uncertainty components • Approach: • Use a common analytical pipeline to place literature results and heterogeneous study results on a common plane (this motivates the file conventions that drive the library design) • Roll up the separate components into an aggregate: current work in progress, for discussion
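The roll-up step above can be sketched as follows — a minimal illustration, assuming the separate uncertainty components are statistically independent and expressed as standard deviations, so their variances add. The function and component names here are hypothetical stand-ins, not the library's actual modules:

```python
import math

def aggregate_uncertainty(components):
    """Roll up independent uncertainty components (given as standard
    deviations) into one aggregate by summing variances and taking
    the square root."""
    return math.sqrt(sum(sd ** 2 for sd in components.values()))

# Illustrative (made-up) component estimates for a volumetric CT biomarker:
components = {
    "repeatability": 0.08,     # test-retest component
    "inter_reader": 0.05,      # reader variability component
    "inter_algorithm": 0.04,   # algorithm variability component
}
total = aggregate_uncertainty(components)
```

Correlated components would need covariance terms rather than a simple sum of variances; the independence assumption is what a common analytical pipeline across heterogeneous studies makes plausible to check.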

  7. Analyze: Aggregate Uncertainty • Go back to Jovanna’s desktop

  8. Execute: Basic Plan • Generalize the processing framework from the previous development year • Support user-in-the-loop processing workflows for certain data-processing tasks • Refine input and output formats to adhere to newer standards (AIM 4.0, DICOM Segmentation Objects, etc.)
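As one small illustration of the format refinement above, incoming DICOM objects can be recognized as Segmentation Objects by their SOP Class UID. This is a minimal sketch — the helper name is hypothetical, and a real pipeline would parse the file with a DICOM library (e.g. pydicom) rather than accept a raw string:

```python
# DICOM Segmentation Storage SOP Class UID (from the DICOM standard's
# UID registry, PS3.6).
SEG_STORAGE_UID = "1.2.840.10008.5.1.4.1.1.66.4"

def is_segmentation_object(sop_class_uid: str) -> bool:
    """Return True when the SOP Class UID identifies a DICOM
    Segmentation Object."""
    return sop_class_uid.strip() == SEG_STORAGE_UID
```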

  9. Execute: Implementation • Support for radiological worklists and DICOM Query/Retrieve directly from Execute • Batchmake scripts initiate worklist-item delegation, and reader stations can retrieve those datasets from Midas as they would from a PACS • Generalize the processing API harness to allow arbitrary algorithm runs on arbitrary datasets • Optimize the web API and scripting interface to allow more seamless interaction with other QI-Bench applications
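A reader station's retrieval step through the scripting interface might look like the following sketch. The server URL, endpoint path, and parameter names are all hypothetical stand-ins for illustration — not the actual Midas/QI-Bench web API:

```python
import json
import urllib.request

MIDAS_URL = "https://example.org/midas/api"  # hypothetical server

def build_retrieve_request(dataset_id, api_token):
    """Build the HTTP request a reader station might use to pull a
    delegated worklist dataset from Midas, much as it would query a
    PACS. Endpoint and parameter names are illustrative only."""
    body = json.dumps({"dataset": dataset_id, "token": api_token})
    return urllib.request.Request(
        f"{MIDAS_URL}/dataset/retrieve",
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Usage (request is built but not sent here):
req = build_retrieve_request("LIDC-0001", "secret-token")
```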

  10. [figure slide]

  11. Value proposition of QI-Bench • Efficiently collect and exploit evidence establishing standards for optimized quantitative imaging: • Users want confidence in the read-outs • Pharma wants to use them as endpoints • Device/SW companies want to market products that produce them without huge costs • Public wants to trust the decisions that they contribute to • By providing a verification framework to develop precompetitive specifications and support test harnesses to curate and utilize reference data • Doing so as an accessible and open resource facilitates collaboration among diverse stakeholders

  12. Summary: QI-Bench Contributions • We make it practical to increase the magnitude of data for increased statistical significance. • We provide practical means to grapple with massive data sets. • We address the problem of efficient use of resources to assess limits of generalizability. • We make formal specification accessible to diverse groups of experts that are not skilled or interested in knowledge engineering. • We map both medical as well as technical domain expertise into representations well suited to emerging capabilities of the semantic web. • We enable a mechanism to assess compliance with standards or requirements within specific contexts for use. • We take a “toolbox” approach to statistical analysis. • We provide the capability in a manner which is accessible to varying levels of collaborative models, from individual companies or institutions to larger consortia or public-private partnerships to fully open public access.

  13. QI-Bench Structure / Acknowledgements • Prime: BBMSC (Andrew Buckler, Gary Wernsing, Mike Sperling, Matt Ouellette, Kjell Johnson, Jovanna Danagoulian) • Co-Investigators • Kitware (Rick Avila, Patrick Reynolds, Julien Jomier, Mike Grauer) • Stanford (David Paik) • Financial support as well as technical content: NIST (Mary Brady, Alden Dima, John Lu) • Collaborators / Colleagues / Idea Contributors • Georgetown (Baris Suzek) • FDA (Nick Petrick, Marios Gavrielides) • UMD (Eliot Siegel, Joe Chen, Ganesh Saiprasad, Yelena Yesha) • Northwestern (Pat Mongkolwat) • UCLA (Grace Kim) • VUmc (Otto Hoekstra) • Industry • Pharma: Novartis (Stefan Baumann), Merck (Richard Baumgartner) • Device/Software: Definiens, Median, Intio, GE, Siemens, Mevis, Claron Technologies, … • Coordinating Programs • RSNA QIBA (e.g., Dan Sullivan, Binsheng Zhao) • Under consideration: CTMM TraIT (Andre Dekker, Jeroen Belien)
