
Program Update December 13, 2012 Andrew J. Buckler, MS Principal Investigator, QI-Bench


Presentation Transcript


  1. Program Update December 13, 2012 Andrew J. Buckler, MS Principal Investigator, QI-Bench. With funding support provided by the National Institute of Standards and Technology

  2. Agenda • Enterprise Architecture: • Requirements overview • Background for non-software engineering professionals • Enterprise architecture modeling • Analysis library: • Current status and active extensions in progress • Drill-down on segmentation analysis activities • Update on workflow engine for the Compute Services: • First demonstration of Kepler, using the segmentation analysis as example

  3. Requirements Overview

  4. Background for non-software engineering professionals • MVC – Model, View, Controller • Design Patterns • Frameworks

  5. Background for non-software engineering professionals: Model–View–Controller • Model: represents the state of what we are doing and how we think about it • View: how we perceive and seem to manipulate the model • Controller: the mediator between the Model and the View
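The division of labor among the three roles can be sketched in a few lines of Python. This is a generic illustration of the MVC pattern only; all class and method names are invented for this example and are not QI-Bench code:

```python
# Minimal illustrative MVC sketch (all names invented for this example).

class Model:
    """Represents the state of what we are doing."""
    def __init__(self):
        self.value = 0

class View:
    """How we perceive the model: renders state for the user."""
    def render(self, model):
        return f"value = {model.value}"

class Controller:
    """Mediates between Model and View: applies user actions
    to the Model, then asks the View to re-render."""
    def __init__(self, model, view):
        self.model, self.view = model, view
    def increment(self):
        self.model.value += 1                 # update state
        return self.view.render(self.model)   # refresh what the user sees

controller = Controller(Model(), View())
print(controller.increment())  # value = 1
print(controller.increment())  # value = 2
```

Note that the View never mutates the Model directly; only the Controller does, which is what keeps the pieces independently replaceable.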

  6. Background for non-software engineering professionals: Design Patterns

  7. Background for non-software engineering professionals: Frameworks

  8. (no transcript text)

  9. Most familiar: Data Services. Same as what we’ve been doing with the Reference Data Set Manager in Midas and the ISA files, but extended with a data virtualization layer for federated and heterogeneous storage.
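The idea of a data virtualization layer can be sketched as a thin adapter over heterogeneous stores, so callers see one namespace regardless of where data lives. The classes and URI scheme below are hypothetical illustrations of the concept, not QI-Bench APIs:

```python
# Illustrative sketch of a data virtualization layer over federated,
# heterogeneous storage. All names here are hypothetical.

class LocalStore:
    """Stands in for a local repository (e.g., files on disk)."""
    def __init__(self, items):
        self._items = items
    def fetch(self, key):
        return self._items[key]

class RemoteStore:
    """Stands in for a federated remote repository; a real one
    would issue HTTP/REST calls instead of a dict lookup."""
    def __init__(self, records):
        self._records = records
    def fetch(self, key):
        return self._records[key]

class VirtualDataLayer:
    """Presents many heterogeneous stores as one URI namespace."""
    def __init__(self, stores):
        self._stores = stores  # maps scheme prefix -> store
    def fetch(self, uri):
        prefix, _, key = uri.partition("://")
        return self._stores[prefix].fetch(key)

vdl = VirtualDataLayer({
    "midas": LocalStore({"ct-001": b"pixel data"}),
    "isa":   RemoteStore({"study-42": b"study metadata"}),
})
print(vdl.fetch("midas://ct-001"))
```

The point of the pattern is that adding a new storage backend means adding one adapter class, with no change to the code that consumes the data.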

  10. Also familiar: Compute Services. More on this later in the agenda.

  11. Less familiar to some, but foundational to the full vision: The Blackboard. Specify already has an early version of this, but substantial modification is planned.

  12. Interfacing to existing ecosystem: Workstations. In short: the primary way of working for clinical users is extended rather than changed.

  13. Internal components within QI-Bench to make it work: Controller and Model Layers. Largely hidden from users, but supporting the various use cases.

  14. Internal components within QI-Bench to make it work: QI-Bench REST. Supports the QI-Bench GUI as well as external systems, notably workstations.
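The value of a single REST layer is that the web GUI and an external workstation hit the same resources. A route table makes that concrete; the resource paths and handlers below are invented for illustration and are not the actual QI-Bench API:

```python
# Tiny route-dispatch sketch of a REST layer.
# Paths and handler names are hypothetical, for illustration only.

ROUTES = {}

def route(request_line):
    """Register a handler for a 'METHOD /path' request line."""
    def register(handler):
        ROUTES[request_line] = handler
        return handler
    return register

@route("GET /datasets")
def list_datasets():
    # A real handler would query the data services layer.
    return {"datasets": ["reference-set-1"]}

@route("GET /analyses")
def list_analyses():
    return {"analyses": ["bland-altman"]}

def dispatch(request_line):
    handler = ROUTES.get(request_line)
    if handler is None:
        return {"error": 404}
    return handler()

# The same endpoint serves the QI-Bench GUI and a workstation client alike.
print(dispatch("GET /datasets"))
```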

  15. Last but not least: QI-Bench Web GUI. These are the views that we’ve been evolving since early in the program.

  16. (Go to latest top-level GUI concept demo)

  17. Compute Services: Objects for the Analyze Library • Capabilities to analyze literature, to extract • Reported technical performance • Covariates commonly measured in clinical trials • Capability to analyze data to • Characterize image dataset quality • Characterize statistical outliers in datasets • Capability to analyze technical performance of datasets to, e.g., • Characterize effects due to scanner settings, sequence, geography, reader, scanner model, site, and patient status • Quantify sources of error and variability • Characterize intra- and inter-reader variability in the reading process • Evaluate image segmentation algorithms • Capability to analyze clinical performance, e.g., • Response analysis in clinical trials • Analyze relative effectiveness of response criteria and/or read paradigms • Overcome individual metrics’ limitations and add complementarity • Establish biomarker characteristics and/or value as a surrogate endpoint (Slide status legend: In Place / In Progress / In Queue)

  18. Analyze Library: Coding View • Core Analysis Modules: • AnalyzeBiasAndLinearity • PerformBlandAltmanAndCCC • ModelLinearMixedEffects • ComputeAggregateUncertainty • Meta-analysis Extraction Modules: • CalculateReadingsFromMeanStdev (written in MATLAB to generate synthetic data) • CalculateReadingsFromStatistics (written in R to generate synthetic data; inputs are number of readings, mean, standard deviation, and inter- and intra-reader correlation coefficients) • CalculateReadingsAnalytically • Utility Functions: • PlotBlandAltman • GapBarplot • Blscatterplotfn
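For orientation, the statistical core of a Bland-Altman agreement analysis (the kind of computation a module like PerformBlandAltmanAndCCC performs) reduces to the mean of paired differences (bias) and the 95% limits of agreement, bias ± 1.96·SD. The sketch below is an illustrative Python rendering only; the library's actual modules are written in R and MATLAB:

```python
# Illustrative Bland-Altman core: bias and 95% limits of agreement
# for paired measurements from two methods. Not QI-Bench library code.
import statistics

def bland_altman(method_a, method_b):
    """Return (bias, (lower LoA, upper LoA)) for paired measurements."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)           # systematic difference
    sd = statistics.stdev(diffs)            # spread of differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Example with made-up paired readings (e.g., two segmentation algorithms):
a = [10.1, 9.8, 10.4, 10.0, 9.9]
b = [10.0, 10.0, 10.1, 9.8, 10.2]
bias, (lo, hi) = bland_altman(a, b)
print(f"bias={bias:.3f}, LoA=({lo:.3f}, {hi:.3f})")
```

In a Bland-Altman plot (as produced by PlotBlandAltman), each difference is plotted against the pair's mean, with horizontal lines at the bias and the two limits of agreement.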

  19. Drill-down on segmentation analysis activities

  20. Update on Workflow Engine for the Compute Services • Allows users to create their own workflows and facilitates sharing and re-use of workflows • Has a good interface for capturing the provenance of data • Works across different platforms (Linux, OS X, and Windows) • Easy access to geographically distributed data repositories, computing resources, and workflow libraries • Robust graphical interface • Can operate on data stored in a variety of formats, locally and over the Internet (APIs, RESTful web interfaces, SOAP, etc.) • Directly interfaces to R, MATLAB, ImageJ (or other viewers) • Ability to create new components or wrap existing components from other programs (e.g., C programs) for use within the workflow • Provides extensive documentation • Grid-based approaches to distributed computation (Slide legend distinguishes capabilities supported by Taverna from those that could also be done in Taverna but are already supported in Kepler)

  21. Taverna and Kepler: two powerful suites for workflow management. However, Kepler improves on Taverna by: • Grid-based approaches to distributed computation • Direct interfaces to MATLAB, ImageJ (or other viewers) • Ability to wrap existing components from other programs (e.g., C programs) for use within the workflow • Extensive documentation …go to demo

  22. (no transcript text)

  23. Value proposition of QI-Bench • Efficiently collect and exploit evidence establishing standards for optimized quantitative imaging: • Users want confidence in the read-outs • Pharma wants to use them as endpoints • Device/SW companies want to market products that produce them without huge costs • The public wants to trust the decisions they contribute to • By providing a verification framework to develop precompetitive specifications and support test harnesses to curate and utilize reference data • Doing so as an accessible and open resource facilitates collaboration among diverse stakeholders

  24. Summary: QI-Bench Contributions • We make it practical to increase the magnitude of data for increased statistical significance. • We provide practical means to grapple with massive data sets. • We address the problem of efficient use of resources to assess limits of generalizability. • We make formal specification accessible to diverse groups of experts who are not skilled or interested in knowledge engineering. • We map both medical and technical domain expertise into representations well suited to emerging capabilities of the semantic web. • We enable a mechanism to assess compliance with standards or requirements within specific contexts for use. • We take a “toolbox” approach to statistical analysis. • We provide the capability in a manner accessible to varying levels of collaborative models, from individual companies or institutions, to larger consortia or public-private partnerships, to fully open public access.

  25. QI-Bench Structure / Acknowledgements • Prime: BBMSC (Andrew Buckler, Gary Wernsing, Mike Sperling, Matt Ouellette, Kjell Johnson, Jovanna Danagoulian) • Co-Investigators: • Kitware (Rick Avila, Patrick Reynolds, Julien Jomier, Mike Grauer) • Stanford (David Paik) • Financial support as well as technical content: NIST (Mary Brady, Alden Dima, John Lu) • Collaborators / Colleagues / Idea Contributors: • Georgetown (Baris Suzek) • FDA (Nick Petrick, Marios Gavrielides) • UMD (Eliot Siegel, Joe Chen, Ganesh Saiprasad, Yelena Yesha) • Northwestern (Pat Mongkolwat) • UCLA (Grace Kim) • VUmc (Otto Hoekstra) • Industry: • Pharma: Novartis (Stefan Baumann), Merck (Richard Baumgartner) • Device/Software: Definiens, Median, Intio, GE, Siemens, Mevis, Claron Technologies, … • Coordinating Programs: • RSNA QIBA (e.g., Dan Sullivan, Binsheng Zhao) • Under consideration: CTMM TraIT (Andre Dekker, Jeroen Belien)
