
Application of High Performance Computing to Situation Awareness Simulations


Presentation Transcript


  1. Application of High Performance Computing to Situation Awareness Simulations / Application of High Performance Computing to Near-Real Time Simulations. Amit Majumdar, Group Leader, Scientific Computing, San Diego Supercomputer Center; Associate Professor, Dept of Radiation Oncology, University of California San Diego

  2. Outline • Academic High Performance Computing • Applications • Event-driven Science • Online Adaptive Cancer Radiotherapy • Dynamic Data Driven Image-guided Neurosurgery • Summary

  3. Academic High Performance Computing

  4. TeraGrid • NSF – National Science Foundation – funds TeraGrid • TeraGrid – NSF-funded supercomputer centers in the US connected by a high-bandwidth network • HPC machines in the teraflop (TF, 10^12 floating-point operations/sec) to petaflop (PF, 10^15 floating-point operations/sec) range • 11 Resource Providers, One Facility

  5. NSF – TeraGrid • TeraGrid is a facility that integrates computational, information, and analysis resources at the San Diego Supercomputer Center, the Texas Advanced Computing Center, the University of Chicago / Argonne National Laboratory, the National Center for Supercomputing Applications, Purdue University, Indiana University, Oak Ridge National Laboratory, the Pittsburgh Supercomputing Center, LSU, and the National Center for Atmospheric Research [Map: site labels include Buffalo, Wisc, Cornell, Utah, Iowa, Caltech, USC-ISI, UNC-RENCI, UC/ANL, PU, NCAR, PSC, IU, NCSA, ORNL, SDSC, LSU, TACC]

  6. Top 5 Top500 HPC Machines [Tables: Top 5 of the November 2009 and November 2008 lists]

  7. NSF HPC Perspective – TF to PF • Track2 awards: • Two plus one – 3 awards • Track2-A/B: $30M for machine plus ~$8–10M/year operating cost – ~500 TF – 1 PF range (peak) • Ranger at TACC, U Texas (579 TF, ~62K cores) • Kraken at NICS, ORNL (1 PF, ~99K cores) • Track2-D: Three different machines: data intensive, experimental, grid research • Other awards for visualization and data systems • Track1 award: • One award – ~$200M • Multi-PF system with sustained PF performance on scientific applications

  8. Event-driven Science

  9. On-demand Earthquake-induced Ground Wave Simulation • http://shakemovie.caltech.edu/ • Prof Jeroen Tromp (at Caltech when we collaborated, currently at Princeton) • Caltech’s near-real-time simulation of southern California seismic events using the SPECFEM3D software • Simulates SoCal seismic wave propagation based on the spectral element method (SEM) – a parallel MPI code • The movies illustrate the up (red) and down (blue) velocity of Earth’s surface

  10. Events • Every time an earthquake of magnitude > 3.5 occurs in SoCal, thousands of seismograms are recorded at hundreds of seismic stations • epicenter, depth, intensity • These seismic recordings are collected automatically from the SCSN via the internet • The seismic waves generated by the earthquake are then simulated in a 3-D southern CA seismic velocity model using the SCSN data • After the full 3-D wave simulation, surface motion data (displacement, velocity, acceleration) are collected and mapped on top of the topography • The data are rendered and movies generated • Earthquake movies are approved by a geophysicist at Caltech • Movies are published – within ~45 mins of the earthquake

  11. On-demand HPC • An earthquake can happen anytime • On-demand HPC resources are needed for fast simulation • The code uses 144 cores (Intel Woodcrest dual-socket dual-core, 2.3 GHz nodes) to complete a simulation in about 20 mins • HPC resources set up at SDSC – called On-demand HPC • This has a special queue where Caltech shakemovie jobs can come in anytime automatically • The batch software will kill other jobs to guarantee this job gets resources • Results are sent back to Caltech – all with no human intervention
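The on-demand guarantee comes from the batch system's preemption policy. Below is a hypothetical host-side sketch of victim selection; the actual SDSC batch software and its job model are not described in the slides, so the names and the smallest-first policy are purely illustrative:

```cuda
// Minimal sketch of an on-demand preemption policy (hypothetical; the real
// SDSC batch system, job model, and APIs are not shown in the slides).
#include <algorithm>
#include <cstdio>
#include <vector>

struct Job {
    int id;
    int nodes;        // nodes currently held by this job
    bool preemptible; // normal-queue jobs may be killed for on-demand work
};

// Select running jobs to kill until `needed` nodes are available.
std::vector<int> selectVictims(std::vector<Job> running, int freeNodes, int needed) {
    std::vector<int> victims;
    // Prefer killing the smallest preemptible jobs first to limit lost work.
    std::sort(running.begin(), running.end(),
              [](const Job& a, const Job& b) { return a.nodes < b.nodes; });
    for (const Job& j : running) {
        if (freeNodes >= needed) break;
        if (!j.preemptible) continue;
        victims.push_back(j.id);
        freeNodes += j.nodes;
    }
    return victims;
}

int main() {
    std::vector<Job> running = {{101, 16, true}, {102, 64, true}, {103, 32, false}};
    // A shakemovie job needs 36 nodes (144 cores / 4 cores per node); 8 are free.
    for (int id : selectVictims(running, /*freeNodes=*/8, /*needed=*/36))
        std::printf("preempt job %d\n", id);
    return 0;
}
```

Smallest-first is only one possible victim-selection rule; the slides say only that other jobs are killed so the incoming job is guaranteed resources.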

  12. Shake Movies • Implications • Emergency preparedness/response • Tsunami warning • Work is being extended to do global simulations • Example event: Sun Apr 11, 2010, 16:42:07; Lat: 32.5285, Long: -115.3433

  13. Online Adaptive Cancer Therapy http://radonc.ucsd.edu/Research/CART

  14. Conventional Radiotherapy • Treatment simulation • Build a virtual patient model • Treatment planning • Perform virtual treatment using a virtual machine on the virtual patient • Treatment delivery • The same treatment is repeated for many fractions • Basic assumption: the human body is a static system • Workflow: Simulation → (days) → Planning → (days) → Treatment (repeated)

  15. Human Body Is A Dynamic System [Images: tumor at Week 1 vs. Week 3; Van de Bunt et al. ’06] • Tumor volume shrinkage in response to the treatment • Tumor shape deformation due to filling-state changes of neighboring organs • Relative position change between tumor and normal organs

  16. Consequence of Patient Anatomical Variation • An optimal treatment plan may become less optimal or not optimal at all • Dose to tumor ↓ • Dose to normal tissues ↑ • Dose to tumor ↓ → Tumor control ↓ • Dose to normal tissues ↑ → Toxicity ↑ • Toxicity ↑ →Prescribed tumor dose ↓→ Tumor control ↓

  17. Solution • Develop a new treatment plan that is optimal to patient’s new geometry • Adaptive radiation therapy (ART)

  18. Online ART • On-board volumetric imaging has recently become available • Major technical obstacles for clinical realization of online ART: • Real-time re-planning • Imaging dose • Clinical workflow • Workflow: Simulation → (days) → Planning → (days) → On-board Imaging → (5–8 min) → Re-planning → Treatment (repeated)

  19. Our Solution to the Real-time Re-planning Problem: Development of GPU-based computational tools

  20. SCORE: Supercomputing On-line Re-planning Environment • Project Goal • To develop real-time re-planning tools based on GPUs • Funded by a UC Lab Research Grant • A collaboration with SDSC and Lawrence Livermore National Laboratory

  21. Online Re-planning Process • CBCT reconstruction from on-board imaging • Deformable image registration: planning CT w/ contours → deformed pCT and contours • Treatment planning system – dose calculation: beam setup from the initial plan → dose deposition coefficients and dose distribution • Plan re-optimization → new plan
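Chained together, these stages form a single GPU pipeline. The following is a hypothetical driver sketch; all type and function names are placeholders rather than the SCORE API, and the per-stage timings are the numbers reported later on slide 32:

```cuda
// Hypothetical driver for the online re-planning pipeline. All names are
// illustrative placeholders; the actual SCORE interfaces are not shown in
// the slides. Each stage runs on the GPU.
struct Volume   { /* 3-D image grid */ };
struct Contours { /* organ / tumor delineations */ };
struct Plan     { /* beam setup + fluence weights */ };

// Stubs standing in for the real GPU stages.
static Volume reconstructCBCT()                                { return {}; }
static void   demonsRegister(const Volume&, const Volume&,
                             Volume*, Contours*)               {}
static void   computeDose(const Volume&, const Plan&, float*)  {}
static Plan   reoptimize(const float*, const Contours&)        { return {}; }

Plan onlineReplan(const Volume& planningCT, const Contours& contours,
                  const Plan& initialPlan) {
    Volume cbct = reconstructCBCT();                        // on-board imaging
    Volume defCT; Contours defContours;
    demonsRegister(planningCT, cbct, &defCT, &defContours); // ~7 s on GPU
    float doseCoeffs[1] = {};                               // device buffer in practice
    computeDose(defCT, initialPlan, doseCoeffs);            // < 2 s on GPU
    return reoptimize(doseCoeffs, defContours);             // 1-30 s on GPU
}
```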

  22. Development of GPU-based Real-time Deformable Image Registration Gu et al Phys Med Biol 55(1): 207-219, 2010

  23. Deformable Image Registration • Morphing one image into another with correct correspondence

  24. Deformable Image Registration with ‘Demons’ Gu et al Phys Med Biol 55(1): 207-219, 2010
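The demons force is computed independently at every voxel, which is why it maps so well to the GPU. A minimal sketch of one iteration using the classic Thirion force is below; Gu et al. evaluate several demons variants and also smooth the displacement field between iterations, which is omitted here:

```cuda
// One iteration of the classic (Thirion) demons force, one thread per voxel.
// Image gradients gx/gy/gz of the static image are assumed precomputed.
__global__ void demonsForce(const float* fixedImg,  // static image f
                            const float* movingImg, // moving image m, already warped
                            const float* gx, const float* gy, const float* gz,
                            float* ux, float* uy, float* uz, // displacement field
                            int nVoxels) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= nVoxels) return;
    float diff  = movingImg[i] - fixedImg[i];
    float g2    = gx[i]*gx[i] + gy[i]*gy[i] + gz[i]*gz[i];
    float denom = g2 + diff*diff;
    // u = (m - f) grad(f) / (|grad f|^2 + (m - f)^2), guarded against 0/0
    float s = (denom > 1e-12f) ? diff / denom : 0.0f;
    ux[i] += s * gx[i];
    uy[i] += s * gy[i];
    uz[i] += s * gz[i];
}
```

A launch such as `demonsForce<<<(nVoxels + 255)/256, 256>>>(...)` assigns one thread per voxel; this per-voxel independence is what yields the ~100x speedups reported on the next slide.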

  25. Results for GPU-based Demons Algorithms [Table: 3D spatial error (mm) / GPU time (s), image size 256×256×100] • ~100x speedup compared to an Intel Xeon 2.27 GHz CPU

  26. Development of GPU-based Real-time Dose Calculation Gu et al Phys Med Biol 54(20): 6287-97, 2009; Jia et al Phys Med Biol 2010 (in press)

  27. Finite-size Pencil Beam (FSPB) Model
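In an FSPB model, the dose at a voxel is a sum over finite-size beamlets of a central-axis depth term times lateral spread factors. The sketch below shows only that generic structure; the fitted depth-dose curve and lateral kernel of the actual model in Gu et al. are replaced by placeholder functions:

```cuda
// Schematic finite-size pencil beam (FSPB) dose superposition: one thread
// per voxel, summing over beamlets. Structure only; the depth-dose term and
// lateral kernel here are illustrative placeholders.
struct Beamlet {
    float weight;    // fluence weight from the plan
    float cx, cy;    // beamlet center in beam's-eye-view coords (cm)
    float halfWidth; // half of the finite beamlet size (cm)
    float sigma;     // lateral spread parameter (cm)
};

// Lateral factor: Gaussian spread integrated across the finite beamlet width.
__device__ float lateral(float offset, float a, float sigma) {
    return 0.5f * (erff((offset + a) / sigma) - erff((offset - a) / sigma));
}

__global__ void fspbDose(const Beamlet* beamlets, int nBeamlets,
                         const float* depth,               // radiological depth per voxel
                         const float* px, const float* py, // voxel positions in BEV (cm)
                         float* dose, int nVoxels) {
    int v = blockIdx.x * blockDim.x + threadIdx.x;
    if (v >= nVoxels) return;
    float d = 0.0f;
    for (int b = 0; b < nBeamlets; ++b) {
        float axis = expf(-0.04f * depth[v]); // placeholder depth-dose term
        d += beamlets[b].weight * axis
           * lateral(px[v] - beamlets[b].cx, beamlets[b].halfWidth, beamlets[b].sigma)
           * lateral(py[v] - beamlets[b].cy, beamlets[b].halfWidth, beamlets[b].sigma);
    }
    dose[v] = d;
}
```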

  28. Results for GPU-based FSPB Algorithm • ~400x speedup compared to an Intel Xeon 2.27 GHz CPU • < 1 sec for a 9-field prostate IMRT plan

  29. Monte Carlo Dose Calculation on GPU • Directly map the DPM code onto the GPU • Treat a GPU card as a CPU cluster • Flowchart: Start → transfer data to GPU (random # seeds, cross sections, pre-generated e- tracks, etc.) → on each thread, repeat: a) clean local counter, b) simulate one MC history, c) put dose into the global counter → until a preset # of histories is reached → transfer data from GPU to CPU → End
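The flowchart maps naturally onto a CUDA kernel. A minimal skeleton of that structure follows; the DPM transport physics is replaced by a toy random walk, so only the per-thread history loop and the global dose counter are illustrated:

```cuda
#include <curand_kernel.h>

// Skeleton of the GPU Monte Carlo loop from the slide: each thread repeatedly
// simulates one particle history and adds its dose into a global counter with
// atomics. The DPM physics (photon/electron transport, cross-section lookups,
// pre-generated e- tracks) is replaced by a toy random walk.
__global__ void mcDose(float* globalDose, int nVoxels,
                       unsigned long long seed, int historiesPerThread) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    curandState rng;
    curand_init(seed, tid, 0, &rng); // per-thread random # seed

    for (int h = 0; h < historiesPerThread; ++h) {
        // One MC history: deposit energy along a toy track.
        for (int step = 0; step < 32; ++step) {
            int   voxel   = curand(&rng) % nVoxels;       // fake transport step
            float deposit = 0.01f * curand_uniform(&rng); // fake energy deposit
            atomicAdd(&globalDose[voxel], deposit);       // dose -> global counter
        }
    }
}

// Host side (sketch): copy input data to the GPU, launch until the preset
// number of histories is reached, then copy the dose grid back:
// mcDose<<<blocks, 256>>>(dDose, nVoxels, seed, historiesPerThread);
```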

  30. Results for GPU-based MC Dose Calculation • ~5x speedup compared to an Intel Xeon 2.27 GHz CPU • < 3 min for 1% sigma for photon beams

  31. Development of GPU-based Real-time Plan Re-optimization Men et al Phys Med Biol 54(21):6565-6573, 2009 Men et al Phys Med Biol 2010 (under review) Men et al Med Phys 2010 (to be submitted)

  32. Results of Real-time Re-planning • We have developed GPU-based computational tools for real-time treatment re-planning • For a typical 9-field prostate case • The deformable registration can be done in 7 seconds • The dose calculation takes less than 2 seconds • The plan re-optimization takes less than 1 second (FMO), 2 seconds (DAP), or 30 seconds (VMAT) • A new plan can be developed in about 10-40 seconds • Online ART may substantially improve local tumor control while reducing normal tissue complications • Tools can be used to solve other radiotherapy problems
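The slides do not spell out the optimization algorithm; Men et al. describe a gradient projection approach for FMO. As a minimal sketch, one projected-gradient iteration for a least-squares FMO objective might look like the following, where the matrix layout, objective, and fixed step size are illustrative assumptions:

```cuda
// Sketch of fluence map optimization (FMO) by projected gradient descent.
// Dose model: d = A x, with A the (nVoxels x nBeamlets) dose deposition
// coefficient matrix from the dose calculation stage, x >= 0 beamlet weights.

__global__ void residual(const float* A, const float* x, const float* presc,
                         float* r, int nVoxels, int nBeamlets) {
    int i = blockIdx.x * blockDim.x + threadIdx.x; // one thread per voxel
    if (i >= nVoxels) return;
    float d = 0.0f;
    for (int j = 0; j < nBeamlets; ++j) d += A[i * nBeamlets + j] * x[j];
    r[i] = d - presc[i];                           // dose minus prescription
}

__global__ void projectedStep(const float* A, const float* r, float* x,
                              float eta, int nVoxels, int nBeamlets) {
    int j = blockIdx.x * blockDim.x + threadIdx.x; // one thread per beamlet
    if (j >= nBeamlets) return;
    float g = 0.0f;                                // gradient = A^T r
    for (int i = 0; i < nVoxels; ++i) g += A[i * nBeamlets + j] * r[i];
    x[j] = fmaxf(0.0f, x[j] - eta * g);            // project onto x >= 0
}
```

Iterating `residual` then `projectedStep` until convergence yields non-negative beamlet weights; real implementations use clinically weighted objectives and exploit the sparsity of A.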

  33. Dynamic Data Driven Image-guided Neurosurgery A Majumdar1, A Birnbaum1, D Choi1, A Trivedi2, S. K. Warfield3, K. Baldridge1, and Petr Krysl2 • 1 San Diego Supercomputer Center, University of California San Diego • 2 Structural Engineering Dept, University of California San Diego • 3 Computational Radiology Lab, Brigham and Women’s Hospital, Harvard Medical School • Grants: NSF: ITR 0427183, 0426558; NIH: P41 RR13218, P01 CA67165, LM0078651; I3 grant (IBM)

  34. Neurosurgery Challenge • Challenges: • Remove as much tumor tissue as possible • Minimize the removal of healthy tissue • Avoid the disruption of critical anatomical structures • Know when to stop the resection process • Compounded by the intra-operative brain shape deformation that happens as a result of the surgical process – the value of the preoperative plan diminishes • It is important to be able to quantify and correct for these deformations while surgery is in progress, by dynamically updating pre-operative images in a way that allows surgeons to react to the changing conditions • The simulation pipeline must meet the real-time constraints of neurosurgery – provide updated images approx. once/hour, within a few minutes each, during surgery lasting 6 to 8 hours

  35. Intraoperative MRI Scanner at BWH

  36. Brain Shape Deformation [Images: before surgery vs. after surgery]

  37. Example of visualization: Intra-op Brain Tumor with Pre-op fMRI

  38. Overall Process • Before image-guided neurosurgery: Preoperative Data Acquisition → Segmentation and Visualization → Preoperative Planning of Surgical Trajectory • During image-guided neurosurgery: Intraoperative MRI → Segmentation → Registration (with preoperative data) → Surface matching → Solve biomechanical model for volumetric deformation → Visualization → Surgical process

  39. Timing During Surgery [Timeline, 0–40 min: before surgery – preop segmentation; during surgery – intraop MRI → segmentation → registration → surface displacement → biomech simulation → visualization → surgical progress]

  40. Current Prototype DDDAS Inside Hospital • Pre- and intra-op 3D MRI (once/hr) → segmentation, registration, surface matching for boundary conditions → crude linear elastic FEM solution on a local computer at BWH → merge pre- and intra-op visualization → intra-op surgical decision and steering • Repeated once every hour or two for a 6 or 8 hour surgery

  41. Two Research Aspects • Grid architecture – grid scheduling, on-demand remote access to multi-teraflop machines, data transfer • Data transfer from BWH to SDSC, solution of the detailed advanced biomechanical model, and transfer of results back to BWH for visualization need to be performed in a few minutes • Development of a detailed, advanced, non-linear, scalable viscoelastic biomechanical model • To capture detailed intraoperative brain deformation

  42. End-to-end Timing of RTBM • Timing of transferring ~20 MB files from BWH to SDSC, running simulations on 16 nodes (32 procs), and transferring files back to BWH = 9* + (60** + 7***) + 50* = 126 sec • This shows that the grid infrastructure can provide biomechanical brain-deformation simulation solutions (using the linear elastic model) to surgery rooms at BWH within ~2 mins using TG machines • This satisfies the tight time constraint set by the neurosurgeons

  43. Current and New Biomechanical Model • Current linear elastic material model – RTBM • Advanced model under development – FAMULS • The advanced model is based on a conforming adaptive mesh refinement (AMR) method – the FAMULS package • Inspired by the theory of wavelets, this refinement produces globally compatible meshes by construction • The first task was to replicate the linear elastic result produced by the RTBM code using FAMULS

  44. Advanced Biomechanical Model • The current solver is based on the small-strain isotropic elastic principle • The new biomechanical model will be an inhomogeneous, scalable, non-linear viscoelastic model with AMR • We also want to increase resolution close to the level of MRI voxels, i.e., finite-element meshes with millions of elements • Since this complex model still has to meet the real-time constraint of neurosurgery, it requires fast access to remote multi-TF systems
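For reference, a minimal statement of the small-strain isotropic elastic model the current solver is based on, in quasi-static form, is:

```latex
% Quasi-static small-strain isotropic linear elasticity (current RTBM model)
\varepsilon = \tfrac{1}{2}\left(\nabla u + \nabla u^{\mathsf{T}}\right), \qquad
\sigma = \lambda\,\mathrm{tr}(\varepsilon)\, I + 2\mu\,\varepsilon, \qquad
\nabla \cdot \sigma = 0
```

Here u is the displacement field and λ, μ are the Lamé parameters; boundary conditions come from the surface displacements extracted from the intraoperative MRI. The planned upgrade replaces the constitutive law σ(ε) with an inhomogeneous non-linear viscoelastic one.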

  45. Summary • HPC resources can enable near-real-time simulations for various scientific, engineering, and medical applications • The architecture has to plan: • what the right HPC resources are • how to access the HPC resources • how to deal with data transfer, etc. • Overall this can facilitate: • Natural or man-made event-driven rapid response and preparedness • Adaptive simulations to provide new capability • Dynamic data-driven simulations to enhance quality
