
Neural Network Segmentation and Validation


Presentation Transcript


  1. Neural Network Segmentation and Validation Nicole M. Grosland Vincent A. Magnotta

  2. Objective • To develop tools to automate bony structure mesh definitions suitable for patient-specific finite element contact analyses. • Further, automate the identification of the structures of the upper extremity (including hand/fingers, wrist, elbow and shoulder) using a neural network.

  3. Specific Aims • Aim 1: Integrate and enhance a set of novel and robust hexahedral mesh generation algorithms into the NA-MIC toolkit. • Aim 2: Further automate these modeling capabilities by developing tools for automated image region identification via neural networks. • Aim 3: Validate the geometry of the models using cadaveric specimens and three-dimensional surface scans.

  4. Imaging Protocol • 15 cadaveric specimens were acquired and imaged • CT images: Siemens Sensation 64 CT scanner (matrix = 512×512, kVp = 120), 0.34-mm in-plane resolution, 0.4-mm slice thickness • MR images: Siemens 3T Trio scanner • PD-weighted images (2D FSE): TE = 12 ms, TR = 7060 ms, resolution = 0.5×0.5 mm, slice thickness = 1.0 mm, matrix = 512×512 • T1-weighted images (3D MP-RAGE): TE = 3.35 ms, TR = 2530 ms, TI = 1100 ms, resolution = 0.6×0.6×0.5 mm, matrix = 384×384×96 • Post-processing via BRAINS2: spatially normalized and resampled to isotropic 0.2-mm voxels
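
The original post-processing used BRAINS2. As a rough illustration of the resampling step only, the sketch below shows how an image could be resampled to isotropic 0.2-mm voxels with SimpleITK; the file names, identity transform, and linear interpolator are assumptions, not details taken from the slides.

```python
# Illustrative only: the original pipeline used BRAINS2; this SimpleITK sketch
# shows the same resampling idea (isotropic 0.2-mm voxels).
import SimpleITK as sitk

image = sitk.ReadImage("specimen_ct.nii.gz")   # hypothetical file name
new_spacing = (0.2, 0.2, 0.2)                  # target isotropic spacing in mm

# Compute the output size so the physical extent of the image is preserved.
old_size = image.GetSize()
old_spacing = image.GetSpacing()
new_size = [int(round(osz * ospc / nspc))
            for osz, ospc, nspc in zip(old_size, old_spacing, new_spacing)]

resampled = sitk.Resample(
    image,
    new_size,
    sitk.Transform(),        # identity transform (spatial normalization omitted)
    sitk.sitkLinear,         # linear interpolation (an assumption)
    image.GetOrigin(),
    new_spacing,
    image.GetDirection(),
    0,                       # default pixel value outside the image
    image.GetPixelID(),
)
sitk.WriteImage(resampled, "specimen_ct_0.2mm.nii.gz")
```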

  5. Manual Segmentation • Two trained technicians (Tracer1 and Tracer2) manually traced twenty-one index-finger phalanx bones • The distal, middle, and proximal phalanges • Relative overlap was computed between the two tracers • Records of tracing times were maintained
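
The slides do not give the formula used for relative overlap; a common definition, and the one assumed in this minimal sketch, is the number of voxels in the intersection of the two label masks divided by the number in their union.

```python
import numpy as np


def relative_overlap(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Relative overlap of two binary label masks, taken here as the number of
    voxels in their intersection divided by the number in their union."""
    a = np.asarray(mask_a).astype(bool)
    b = np.asarray(mask_b).astype(bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(a, b).sum() / union)


# Toy example (real use would compare the Tracer1 and Tracer2 label volumes):
t1 = np.zeros((10, 10, 10), dtype=np.uint8); t1[2:7, 2:7, 2:7] = 1
t2 = np.zeros((10, 10, 10), dtype=np.uint8); t2[3:8, 2:7, 2:7] = 1
print(f"relative overlap = {relative_overlap(t1, t2):.2f}")
```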

  6. Neural Network Data • Input vector {PS1, PS2, Sα, Sβ, Sγ, G-4, … G4, A1, … A12}: probability map values, spherical coordinates, gradient values, and area iris values • Output vector {MS1, MS2}: mask values
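
A minimal sketch only: the slide does not describe how each feature is computed, so the code below simply packs precomputed per-voxel features into the stated 26-element input vector layout; the grouping of symbols into feature types is inferred from the labels.

```python
import numpy as np


def pack_input_vector(prob, sph, grad, iris):
    """Concatenate precomputed per-voxel features into the 26-element input
    vector {PS1, PS2, Sα, Sβ, Sγ, G-4..G4, A1..A12}. How each feature is
    derived from the images is not shown on the slide and is not reproduced
    here."""
    prob = np.asarray(prob, dtype=float)   # 2 probability-map values
    sph = np.asarray(sph, dtype=float)     # 3 spherical coordinates
    grad = np.asarray(grad, dtype=float)   # 9 gradient samples (G-4 .. G4)
    iris = np.asarray(iris, dtype=float)   # 12 area-iris values
    assert (prob.size, sph.size, grad.size, iris.size) == (2, 3, 9, 12)
    return np.concatenate([prob, sph, grad, iris])


# The paired training target is the 2-element mask-value vector {MS1, MS2}.
x = pack_input_vector([0.8, 0.1], [0.2, 1.1, 0.5], np.zeros(9), np.ones(12))
print(x.shape)  # (26,)
```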

  7. Neural Network Configuration • Feed-forward network with an input layer, a hidden layer, and an output layer • Error calculated at the output is propagated back through the network (backpropagation); see the sketch after the next slide

  8. Neural Network Training • 10 subjects used to train the neural network • Subjects all registered to an atlas dataset • Manual segmentations used to define probability information • 200,000 input vectors × 250 iterations • 5 subjects used to evaluate the validity and reliability of the network
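
The sketch below is a hedged illustration of the configuration and training described on the last two slides: a one-hidden-layer feed-forward network trained by backpropagation, assuming sigmoid units, a squared-error cost, and arbitrary layer sizes and learning rate. Synthetic arrays stand in for the feature vectors sampled from the 10 registered training subjects, and a reduced vector count is used so the example runs quickly.

```python
import numpy as np


class BackpropNet:
    """Minimal one-hidden-layer feed-forward network trained with
    backpropagation (sigmoid units, squared-error cost). Layer sizes and the
    learning rate are assumptions for illustration, not the original values."""

    def __init__(self, n_in=26, n_hidden=30, n_out=2, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.1, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(self, x):
        # Input layer -> hidden layer -> output layer.
        self.h = self._sigmoid(x @ self.w1 + self.b1)
        self.y = self._sigmoid(self.h @ self.w2 + self.b2)
        return self.y

    def train_step(self, x, target):
        # Error calculated at the output is propagated back through the network.
        y = self.forward(x)
        d_out = (y - target) * y * (1.0 - y)
        d_hid = (d_out @ self.w2.T) * self.h * (1.0 - self.h)
        self.w2 -= self.lr * np.outer(self.h, d_out)
        self.b2 -= self.lr * d_out
        self.w1 -= self.lr * np.outer(x, d_hid)
        self.b1 -= self.lr * d_hid
        return float(0.5 * np.sum((y - target) ** 2))


# Placeholder training data: random vectors stand in for the feature vectors
# sampled from the 10 registered training subjects (the original work used
# 200,000 vectors; a much smaller count keeps this example quick to run).
rng = np.random.default_rng(1)
X = rng.random((1_000, 26))
Y = (rng.random((1_000, 2)) > 0.5).astype(float)

net = BackpropNet()
for iteration in range(250):                  # 250 iterations, as on the slide
    order = rng.permutation(len(X))
    mean_err = np.mean([net.train_step(X[i], Y[i]) for i in order])
    if iteration % 50 == 0:
        print(f"iteration {iteration}: mean error {mean_err:.4f}")
```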

  9. 3D Laser Scanner • 3D laser scanners have been used for rapid prototyping and to non-destructively image ancient artifacts • A Roland LPX-250 scanner was obtained • Planar and rotary scanning modes • 0.008-inch resolution in planar mode • Objects up to 10 inches wide and 16 inches tall can be scanned • Reverse-modeling software tools

  10. LPX-250 Laser Scanner

  11. Finger Dissection • Phalanx and metacarpal bones removed • Care taken to avoid tool marks on the bones • The de-fleshing process outlined by Donahue et al. (2002) was used • Bones soaked in a 5.25% sodium hypochlorite (bleach) solution for 6 hours • Degreased with a soapy-water solution • A thin layer of white primer was applied to the bony surfaces

  12. Specimen photographs: CA05042125L, MD05010306R, MD05042226L, SC05030303R

  13. Registration of Surfaces • Surface-scan origin shifted to the center of mass and reoriented to match the orientation of the CT data • Surfaces registered using a rigid iterative closest point algorithm • Euclidean distance computed between the surfaces
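
A compact numpy/SciPy sketch of the registration and distance steps, assuming point clouds sampled from the two surfaces. The original work used a rigid iterative closest point algorithm; this illustrative version uses a Kabsch best-fit rotation per iteration and is not the original implementation.

```python
import numpy as np
from scipy.spatial import cKDTree


def rigid_icp(source: np.ndarray, target: np.ndarray, n_iters: int = 50):
    """Align a source point cloud (N x 3) to a target point cloud (M x 3) with
    a basic rigid iterative-closest-point loop (Kabsch best fit per iteration).
    Returns the aligned source points and the per-point Euclidean distances."""
    src = np.asarray(source, dtype=float)
    tgt = np.asarray(target, dtype=float)
    # Shift the source so the centres of mass coincide (rough initialisation,
    # mirroring the centre-of-mass shift described on the slide).
    src = src - src.mean(axis=0) + tgt.mean(axis=0)
    tree = cKDTree(tgt)
    for _ in range(n_iters):
        _, idx = tree.query(src)          # closest target point for each source point
        matched = tgt[idx]
        c_src, c_m = src.mean(axis=0), matched.mean(axis=0)
        # Best-fit rotation from the cross-covariance (SVD / Kabsch), with a
        # reflection guard so the transform stays rigid.
        H = (src - c_src).T @ (matched - c_m)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        src = (src - c_src) @ R.T + c_m
    dists, _ = tree.query(src)            # Euclidean distance to the scan surface
    return src, dists


# Example with synthetic clouds (real use would sample the CT-derived and
# laser-scanned surfaces):
pts = np.random.default_rng(0).random((500, 3))
aligned, d = rigid_icp(pts + 0.1, pts)
print(f"mean distance {d.mean():.4f}, std {d.std():.4f}")
```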

  14. Specimen CA05042125L: manual (red) and ANN (blue) ROI definitions (panels a–d)

  15. Manual Segmentation • Relative overlap (Tracer1 vs. Tracer2): 0.89 across the three bones • Individual bones: proximal – 0.91, middle – 0.90, distal – 0.87 • The average time required to manually segment the bones of the index finger was 50.9 minutes (range: 39 to 63 minutes)

  16. ANN Results Compared to Manual Rater: relative overlap of the manual and neural network segmentations (chart)

  17. Example Distance Maps: ANN output vs. 3D physical surface scans

  18. ANN Validation: ANN output vs. 3D physical surface scans

  19. Conclusion • Neural networks provide a promising automated segmentation tool for identifying bony regions of interest • Output was compared to both manual raters and 3D surface scans • Error was less than the size of one voxel • 3D surface scanning provides a true gold standard for evaluating automated segmentation algorithms

  20. Acknowledgements • Grant funding • R21 (EB001501) • R01 (EB005973) • Stephanie Powell, Nicole Kallemeyn, Nicole DeVries, Esther Gassman

  21. Validation • Aim 3: Model Validation: Cadaveric specimens will be used (i) to generate three-dimensional surface scans against which surfaces defined both manually and via the automated neural network will be compared and (ii) to directly validate the computational models developed via the automated meshing algorithms.

  22. Validation • A true “gold-standard” is often very difficult to achieve • Brain imaging often has to rely on manual raters • Guidelines established based on anatomical expertise • Are there better “gold-standards” for other regions of the body?

  23. Orthopaedic Imaging • Ideas developed out of the goal to automate the definition of bony regions of interest • How can we validate these automated tools? • Orthopaedic applications: would it be possible to dissect cadaveric specimens? • Use the bony specimens as the “gold-standard”

  24. Surface Comparison (proximal, middle, and distal phalanges) • Physical surface scan (white) • Manually segmented surface (blue)

  25. Manual surface definitions with various degrees of smoothing: (a) unsmoothed, (b) image-based smoothing, and (c) Laplacian surface-based smoothing
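
Laplacian surface-based smoothing is named on the slide without implementation details. The sketch below shows a generic version in which each vertex is relaxed toward the centroid of its 1-ring neighbours; the iteration count and relaxation factor are arbitrary, and this is not the exact filter used in the original work.

```python
import numpy as np


def laplacian_smooth(vertices: np.ndarray, faces: np.ndarray,
                     n_iters: int = 10, lam: float = 0.5) -> np.ndarray:
    """Move each vertex a fraction `lam` of the way toward the centroid of its
    1-ring neighbours, repeated n_iters times (a generic Laplacian smoother)."""
    verts = np.asarray(vertices, dtype=float).copy()
    # Build vertex adjacency from the triangle faces.
    neighbours = [set() for _ in range(len(verts))]
    for a, b, c in np.asarray(faces, dtype=int):
        neighbours[a].update((b, c))
        neighbours[b].update((a, c))
        neighbours[c].update((a, b))
    neighbours = [np.fromiter(s, dtype=int) for s in neighbours]
    for _ in range(n_iters):
        centroids = np.array([verts[nb].mean(axis=0) if nb.size else verts[i]
                              for i, nb in enumerate(neighbours)])
        verts += lam * (centroids - verts)
    return verts


# Example on a single tetrahedron (real use would pass the segmented bone mesh):
v = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
f = np.array([[0, 1, 2], [0, 1, 3], [0, 2, 3], [1, 2, 3]])
print(laplacian_smooth(v, f, n_iters=3))
```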

  26. Average Euclidean distance and standard deviation between the manually traced unsmoothed surfaces and the physical surface scans.

  27. Average Euclidean distance and standard deviation between the surfaces generated via image-based smoothing and the physical surface scans.

  28. Average Euclidean distance and standard deviations between the surfaces generated via Laplacian surface-based smoothing and the physical surfaces.

  30. Neural Networks • A computing paradigm designed to model how the brain processes data • The network consists of several interconnected neurons that process the input information through an activation function to form an output • What information can be used to segment regions of interest from images?
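
As a small illustration of the neuron-and-activation idea on this slide (the weights, inputs, and bias below are arbitrary values, not taken from the work):

```python
import numpy as np


def neuron(inputs: np.ndarray, weights: np.ndarray, bias: float) -> float:
    """One artificial neuron: a weighted sum of its inputs passed through a
    sigmoid activation function."""
    return float(1.0 / (1.0 + np.exp(-(np.dot(weights, inputs) + bias))))


# Arbitrary illustrative values:
print(neuron(np.array([0.2, 0.8, 0.5]), np.array([1.5, -0.7, 0.3]), bias=0.1))
```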
