
cosmological computing


Presentation Transcript


1. cosmological computing
Computing is central to all major science components of the Center:
• analyses of large data sets (e.g., CMB data, SDSS, DES)
• data modeling (air shower simulations, mock galaxy catalogs, Monte Carlo simulations of experiments, etc.; a minimal sketch follows this list)
• theoretical modeling using numerical simulations
• E&O (visualizations)
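As an illustration of the data-modeling item above, here is a minimal Monte Carlo sketch in the spirit of "simulations of experiments". It is not from the slides: the power-law spectrum, the spectral index, and the detector threshold/efficiency model are all hypothetical choices for illustration.

```python
import numpy as np

# Minimal Monte Carlo sketch of an "experiment simulation": draw event
# energies from an assumed power-law spectrum and estimate the fraction
# that a hypothetical detector would accept.
rng = np.random.default_rng(seed=42)

n_events = 1_000_000
gamma = 2.7                  # assumed spectral index (illustrative)
e_min, e_max = 1.0, 1000.0   # energy range, arbitrary units

# Inverse-transform sampling of dN/dE ~ E^-gamma on [e_min, e_max]
a = 1.0 - gamma
u = rng.uniform(size=n_events)
energies = (e_min**a + u * (e_max**a - e_min**a)) ** (1.0 / a)

# Hypothetical detector: acceptance probability rises smoothly
# through a threshold energy.
threshold = 10.0
efficiency = 1.0 / (1.0 + np.exp(-(energies - threshold)))
detected = rng.uniform(size=n_events) < efficiency

acc = detected.mean()
err = np.sqrt(acc * (1.0 - acc) / n_events)   # binomial error estimate
print(f"estimated acceptance: {acc:.4f} +/- {err:.4f}")
```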

2. some recent history
• A strong internal demand for high-performance computing (HPC) has developed within KICP, both from the individual groups within the MRCs (Particle Astrophysics, Backgrounds, Theory) and more generally from KICP fellows and graduate students. The HPC issue has also come up during fellow recruiting.
• There have been persistent recommendations to invest in HPC resources from the External Advisory Board (2005, 2006).
This spurred the KICP Computing Initiative, which resulted in the expansion of local, small-scale computing resources and the building of the FNAL/KICP computing cluster:
• powerful workstations and small-size storage servers
• small clusters (cosmic rays, mandor)
• the FNAL/KICP cluster

3. FNAL/KICP cluster (fulla.fnal.gov)
• A joint investment by FNAL and KICP (63% FNAL, 37% KICP), housed and maintained by Fermilab.
• 155 nodes (1240 CPU cores), 2.6 TB of RAM, 100 TB of scratch/tmp storage, and 97 TB of long-term storage.
• The cluster has now operated successfully for two years and has been expanded.
• ~3.5 million CPU hours (≈400 CPU-years; see the arithmetic below) used so far for cosmic-ray research, shower modeling, galaxy formation simulations, MCMCs, and structure formation simulations in alternative gravity models.
[Image: FNAL/KICP computing cluster housed at Fermilab]
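The CPU-hour to CPU-year conversion quoted above is straightforward; assuming 24-hour days and 365-day years:

```latex
\frac{3.5\times10^{6}\ \text{CPU-hours}}{24\ \tfrac{\text{h}}{\text{day}} \times 365\ \tfrac{\text{days}}{\text{yr}}}
  = \frac{3.5\times10^{6}}{8760}\ \text{CPU-years} \approx 400\ \text{CPU-years}
```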

4. Examples of research enabled by the cluster
The cluster is actively used by KICP graduate students and fellows, often for their own independent projects:
• simulation of a particle shower for VERITAS
• first simulations of structure formation in alternative gravity models
• simulated propagation of a photon through intergalactic space
• galaxy formation simulations
• modeling TeV electron propagation and instrument response for the CREST experiment

5. Looking into the future… Why now?
Compelling science for which high-performance computing is critical:
dark universe:
- testing CDM as a structure formation paradigm
- calibration of nonlinear and baryonic effects for weak lensing
- modeling the physics of galaxy clusters
- simulations of modified gravity models
- prediction of the dark matter distribution for indirect & direct dark matter detection
- modeling of experimental data and mock catalogs for new surveys (e.g., DES); large data storage and analyses
- interpretation of observational results (MCMCs, parameter estimation; see the sketch after this list)
high-energy universe:
- simulation of the propagation of cosmic rays and high-energy photons in space
- modeling air showers
- detector simulations
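To make the MCMC/parameter-estimation item concrete, here is a minimal Metropolis-Hastings sketch. It is illustrative only and not from the slides; the Gaussian likelihood, the mock data, and the proposal step size are all assumptions.

```python
import numpy as np

# Minimal Metropolis-Hastings MCMC: estimate the mean of noisy
# mock "observations" under an assumed Gaussian likelihood.
rng = np.random.default_rng(seed=0)

data = rng.normal(loc=0.3, scale=1.0, size=200)  # mock data, true mean 0.3
sigma = 1.0                                      # assumed known noise level

def log_likelihood(theta):
    # Gaussian log-likelihood (up to an additive constant)
    return -0.5 * np.sum((data - theta) ** 2) / sigma**2

n_steps, step = 20_000, 0.1
chain = np.empty(n_steps)
theta = 0.0                      # arbitrary starting point
logl = log_likelihood(theta)

for i in range(n_steps):
    proposal = theta + step * rng.normal()        # symmetric random-walk proposal
    logl_prop = log_likelihood(proposal)
    if np.log(rng.uniform()) < logl_prop - logl:  # Metropolis acceptance rule
        theta, logl = proposal, logl_prop
    chain[i] = theta

burn = n_steps // 5              # discard burn-in samples
print(f"posterior mean: {chain[burn:].mean():.3f} +/- {chain[burn:].std():.3f}")
```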

6. Looking into the future… Why a center?
• Computing is a natural cross-cutting component that can be used by nearly all researchers in the Center. It is one of the factors that can make the sum greater than its parts and strengthen ties to other parts of U.Chicago, Fermilab, and Argonne.
• A strong, flexible computing environment stimulates creativity and independent research by students and postdoctoral fellows and teaches them the HPC skills indispensable for their future careers.
• Such a component can be a magnet for talented new students and postdocs, as well as for visitors to the Center who could come to collaborate or learn how to analyze data or simulations. It can form a basis for workshops and summer schools.
• We have unique expertise and are well-positioned to make rapid progress. This can give us an advantage over our competition (mainly KIPAC and Berkeley, perhaps also Harvard, Santa Cruz, OSU, Michigan, NYU).

7. Advantages: Why a center at the University of Chicago? Strong local expertise.
Investing in computing can actually provide interesting new avenues for “reinventing ourselves” for the re-proposal, by exploiting current scientific trends and the tremendous local expertise in scientific high-performance computing:
• experiments, computing division, surveys/data handling (Fermilab)
• Computational Institute, Petascale Active Data Store (PADS) (U.Chicago)
• parallel computing, visualizations, Argonne Leadership Computing Facility (Argonne)

8. Chicagoland
[Diagram: the Chicagoland ecosystem, with the U.Chicago PFC/KICP (dark universe, high-energy universe, inflationary universe, computing, E&O, workshops and summer schools) connected to Fermilab (HPC, DES, +), the Computational Institute, Argonne, supercomputing centers, and visitors.]

9. Computing as a PFC component: what does it involve?
There is a range of options, depending on the level of support that the PFC would allocate to this component:
• small-scale: hardware resources for local use, e.g., continuing investment in and maintenance of the FNAL/KICP cluster, plus development of local storage and computing hardware for smaller-scale computing tailored to specific research needs (requires investment in hardware, but no personnel support)
• medium-scale: archiving data sets and simulation data, providing access to them for the wider community, and supporting their analyses (requires hardware investment and a part-time HPC professional)
• large-scale: developing, supporting, and hosting non-trivial computing codes and state-of-the-art tera- and peta-scale simulations; arranging data storage and effective access; and fostering collaborative, computationally intensive research and related visits with people outside the Center (requires hardware investment and one or more full-time HPC professionals)

10. Summary & concluding remarks
• Computing was not planned as a major component of the current PFC, but it has emerged as a major aspect of it over the last several years due to strong demand on the ground in all major research components of the Center.
• Investment in computing was strongly recommended by the EAB.
• Why now and here? Most of the science we discussed today is computationally intensive and will demand high-performance computing resources and expertise. We are extremely well-positioned to become the center for such science.
• Computing, as a natural cross-cutting component, can help us develop a fresh and compelling vision for the new Center.
