This project explores the formulation and optimization of computational methods in Quantum Field Theory, focusing on solving the Eigenvalue Problem efficiently. It delves into matrix diagonalization, data structure optimization, parallelization, and program design. The study aims to enhance the scalability and efficiency of eigenvalue problem solutions in quantum field theory.
Scalable Computational Methods in Quantum Field Theory Jason Slaunwhite Computer Science and Physics Senior Project Advisors: Hemmendinger, Reich, Hiller (UMD)
Outline • Context / Background • Design • Optimization • Compiler • Data Structures • Parallel • Summary
Context (1) • Physical Model • Strong Force • Yukawa Theory • Quantum field theory • Interactions = particle exchanges • Gauge Bosons • Eigenvalue Problem • Common example: rotation • Form: Ax = λx (matrix × vector = scalar × vector) [Figure: QED-style particle-exchange picture (not QCD); rotation about the z-axis in the xy-plane, with the z-axis as eigenvector]
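The rotation example can be made concrete with a small sketch (illustrative code, not from the project): a rotation about the z-axis moves vectors in the xy-plane but leaves the z-axis itself fixed, so (0, 0, 1) satisfies Ax = λx with λ = 1.

```cpp
#include <array>
#include <cassert>
#include <cmath>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<Vec3, 3>;

// Rotation by angle theta about the z-axis: mixes the x and y
// components, leaves the z component untouched.
Mat3 rotationZ(double theta) {
    return Mat3{{{std::cos(theta), -std::sin(theta), 0.0},
                 {std::sin(theta),  std::cos(theta), 0.0},
                 {0.0,              0.0,             1.0}}};
}

// Plain matrix-vector product y = A x.
Vec3 multiply(const Mat3& A, const Vec3& x) {
    Vec3 y{0.0, 0.0, 0.0};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            y[i] += A[i][j] * x[j];
    return y;
}
```

Multiplying any rotationZ matrix by (0, 0, 1) returns (0, 0, 1): the vector is an eigenvector with eigenvalue 1.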
Context (2) • Formulation of the Eigenvalue Problem • Discrete - Hiller • Basis Function Expansion (BFE) - Slaunwhite • BFE: expand y = f(x) in basis functions, f(x) = a·Gn(x) + b·Gm(x) + … [Figure: y = f(x); basis functions y = Gn(x), y = Gm(x); the discrete formulation; both routes lead to Ax = λx]
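A minimal sketch of the expansion idea, using monomials as a stand-in for the slide's basis functions Gn (the project's actual basis is not specified here):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Stand-in basis functions: G_n(x) = x^n. Monomials just illustrate
// the mechanics of a basis function expansion.
double G(int n, double x) { return std::pow(x, n); }

// f(x) = a*G_0(x) + b*G_1(x) + ... evaluated from a coefficient vector.
double expand(const std::vector<double>& coeffs, double x) {
    double f = 0.0;
    for (std::size_t n = 0; n < coeffs.size(); ++n)
        f += coeffs[n] * G(static_cast<int>(n), x);
    return f;
}
```

With coefficients {1, 2, 3} this evaluates f(x) = 1 + 2x + 3x²; solving for the coefficients of a physical state is what turns the problem into Ax = λx.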
Context (3) • Is BFE a good method for solving the eigenvalue problem? • Is it scalable? • Do the eigenvalues converge as the number of basis functions increases? • How does computation time grow as the method converges?
Design (1) • What does the program do? Input → Calc Matrix → Solve (Diagonalize) • Input parameters • Calculate each independent matrix element • Solve (diagonalize the matrix) • Structure reflects the mathematics; libraries make the solve step easy
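The pipeline above can be sketched as code. This is a hypothetical outline, with placeholder names and a placeholder integrand standing in for the project's actual kernel:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Level 3 (see Design (2)): the kernel, i.e. the integrand.
// Placeholder physics only.
double kernel(double x) { return x * x; }

// Level 2: one numerical integral per matrix element (midpoint rule).
// The (m+1)(n+1) factor is a placeholder index dependence.
double integrate(std::size_t m, std::size_t n) {
    const int steps = 100;
    double sum = 0.0;
    for (int i = 0; i < steps; ++i) {
        double x = (i + 0.5) / steps;
        sum += kernel(x) / steps;
    }
    return sum * static_cast<double>((m + 1) * (n + 1));
}

// Level 1: fill the matrix; each element is independent of the others.
std::vector<std::vector<double>> calcMatrix(std::size_t dim) {
    std::vector<std::vector<double>> A(dim, std::vector<double>(dim));
    for (std::size_t m = 0; m < dim; ++m)
        for (std::size_t n = 0; n < dim; ++n)
            A[m][n] = integrate(m, n);
    return A;
}
```

Diagonalizing the resulting matrix is then handed to a library routine, which is why the structure "reflects the mathematics".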
Design (2) — call hierarchy: • Level 1: Input → Calc Matrix → Diagonalize (solve) • Level 2: Integrate • Level 3: Kernel
Review • Quantum field theory model of the strong force • Eigenvalue problem: Ax = λx (matrix, vector, scalar) • Programming work: the program calculates the matrix elements • How did I optimize it? • Can it run in parallel?
Optimization - Compiler • g++ -O3 (capital O: optimization level 3) • Simple to apply • Adds compile time • Very effective! [Figure: runtime comparison, unoptimized vs. optimized]
Optimization – Data Structures • Naïve approach: for each row, for each col, calculate the element, recomputing shared values every time • Trade-off: storage vs. time • Smarter: precompute a library of values once, outside the element iteration • Need an organized way to index the precomputed values
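The storage-vs-time trade-off can be sketched as follows. The "expensive" values here are factorials via tgamma, a stand-in for whatever the project's element formula actually reuses; all names are illustrative:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Library of values computed once, before the row/column loops, and
// indexed directly afterwards. Storage cost: one vector; time saved:
// no recomputation per matrix element.
struct ValueLibrary {
    std::vector<double> values;
    explicit ValueLibrary(std::size_t n) : values(n) {
        for (std::size_t k = 0; k < n; ++k)
            values[k] = std::tgamma(k + 1.0);  // k! as a stand-in expensive value
    }
    double get(std::size_t k) const { return values[k]; }
};

// Element formula using lookups instead of recomputation.
// (A naive version would call tgamma inside every iteration.)
double elementOptimized(const ValueLibrary& lib, std::size_t row, std::size_t col) {
    return lib.get(row) + lib.get(col);  // placeholder element formula
}
```

The organized indexing is the key point: each precomputed value must be addressable directly from the row/column indices of the element being built.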
Optimization Results [Figure: timing curves] • Key: red = naïve, yellow = data structure, green = data structure + compiler • Slope ratios: red/yellow = 2.56, yellow/green = 2.28, red/green = 5.84 (combined speedup ≈ 5.8×)
Parallel Design • Matrix elements are independent • Split the computation across many processors [Figure: the matrix in Ax = λx partitioned into blocks, one per processor]
Work in progress - Parallelization • OpenMP libraries • IBM SP – MSI • Slower processors, but more of them and more memory • From http://www.ibm.com: "The IBM SP consists of 96 shared-memory nodes with a total of 376 processors and 616 GB of memory"
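Because no matrix element depends on another, the OpenMP approach can be as simple as one directive on the row loop. A sketch under those assumptions (placeholder element formula, illustrative names; without -fopenmp the pragma is ignored and the loop runs serially with the same result):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Rows are distributed among threads by the worksharing directive.
// Each thread writes only its own rows, so no synchronization is needed.
std::vector<std::vector<double>> calcMatrixParallel(std::size_t dim) {
    std::vector<std::vector<double>> A(dim, std::vector<double>(dim));
    #pragma omp parallel for
    for (long m = 0; m < static_cast<long>(dim); ++m)
        for (std::size_t n = 0; n < dim; ++n)
            A[m][n] = static_cast<double>(m * dim + n);  // placeholder element
    return A;
}
```

Compile with `g++ -O3 -fopenmp` to actually run the loop across the SP's shared-memory processors.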
Summary • The program solves Ax = λx • Compiler optimization: g++ -O3 • Parallel? Work in progress