
Presentation Transcript


1. Whither turbulence computations? by K.R. Sreenivasan, New York University. A commentary on the work of Victor Yakhot (Boston University), Jörg Schumacher (University of Ilmenau), P.K. Yeung (Georgia Institute of Technology), Diego Donzis (Texas A&M), and perhaps others. Indian Institute of Science, Tuesday, December 13, 2011.

2. [Figure: from Herman Winick, SLAC]

3. Massive parallelism, with O(10^5) CPU cores, so doing simulations has become a big task in itself. New paradigms, new architectures, etc., and yet… [Figure: GMR; R_λ]

4. Earth Simulator (Kaneda et al. 2003): 160 nodes, each with 8 vector-type processors (1280 processors in total); peak performance per processor is ~100 GFlops, for a total peak performance of 130 TFlops. My collaborators have used (1) Kraken at NICS/University of Tennessee (112,896 compute cores, peak performance of 1.17 PFlops, 147 TB of memory) and (2) Jaguar at Oak Ridge (224,256 cores, peak performance of 1.75 PFlops, 360 TB of memory). In the meantime, the 10-PFlops barrier has been broken by a Fujitsu machine, and exaflop machines with ~100 million (?) cores (~25 MW) are on their way by ~2018. What is the hydrogen atom of turbulence?

5. Orszag & Patterson, Phys. Rev. Lett. 28, 76 (1972).

6. Box turbulence. Grid spacing Δx = a_1 η_K; box size = a_2 L. Here L = integral scale, N = number of grid points (in each direction), η_K = Kolmogorov scale, R_λ = microscale Reynolds number (~ √Re). For "standard" conditions, a_1 = 2, a_2 = 5, we have R_λ ≈ 0.5 W^(1/6) and R_λ ≈ 4.5 N^(2/3).
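
As a quick numerical check of these relations (added here, not part of the slide), the N^(2/3) formula already reproduces the grid sizes quoted later in the talk:

\[
  N = 4096:\quad R_\lambda \approx 4.5 \times 4096^{2/3} = 4.5 \times 256 \approx 1.2 \times 10^{3},
  \qquad
  N = 32{,}768:\quad R_\lambda \approx 4.5 \times 32768^{2/3} = 4.5 \times 1024 \approx 4.6 \times 10^{3},
\]

close to the values ~1200 and ~4000 quoted on slide 8.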

7. [Figure: R_λ]

8. If the Earth Simulator can compute N = 4096, R_λ ≈ 1200 (Re ≈ 10^5) with L/η_K = O(1000), then exaflop machines can handle N = 32,768, R_λ ≈ 4,000 (Re ≈ 10^6), L/η_K = O(10,000), or 4 decades. This ought to happen by the end of the decade. But it won't simulate anywhere near as large a Reynolds number!

9. [Figure: the dissipation surrogate ε; higher Re] η_K = (ν^3/⟨ε⟩)^(1/4); η = (ν^3/ε)^(1/4). The local dissipation scale can be far smaller: defining η by Δu_η η/ν = 1, and noting that Δu_η has large fluctuations, η can often be smaller than η_K.
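
A one-step consistency check (added here, not on the slide): the local Reynolds-number condition for η, combined with the usual gradient estimate of the dissipation, recovers the Kolmogorov-type form quoted above.

\[
  \frac{\Delta u_\eta\,\eta}{\nu} = 1 \;\Rightarrow\; \Delta u_\eta = \frac{\nu}{\eta},
  \qquad
  \varepsilon \sim \nu\left(\frac{\Delta u_\eta}{\eta}\right)^{2} = \frac{\nu^{3}}{\eta^{4}}
  \;\Rightarrow\; \eta = \left(\frac{\nu^{3}}{\varepsilon}\right)^{1/4}.
\]

With ε replaced by its mean ⟨ε⟩ this reduces to η_K; with the strongly fluctuating local ε it does not.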

10. Distribution of length scales (Schumacher, Yakhot). [Figure: probability density of η/⟨η⟩ versus log10(η/⟨η⟩)]
• From the distribution of length scales, we have ⟨τ_d⟩ = ⟨η_B^2⟩/κ ≈ 10 ⟨η_B⟩^2/κ.
• Eddy diffusive time / molecular diffusive time ≈ Re^(1/2)/100; this exceeds unity only for Re ≈ 10^4 (the mixing transition mentioned by Narasimha yesterday).

11. Scales smaller than η (and η_B) clearly exist, so…
• How much of the data acquired at resolutions of the order of η is reliable? The question becomes more relevant at high Reynolds numbers.
• What do we miss if we don't resolve the sub-Kolmogorov scales?
• How critical is it to resolve the sub-Kolmogorov scales for the inertial range (for example)?
• How much better should the grid resolution be for the discrete version to remain "truthful" to the continuum equations?

12. [Figure, from J. Schumacher, in units of the mean; the 128^3 box represents "standard resolution"]

  13. Sn rn by Taylor’s expansion near r = 0

14. Fronts of scalar dissipation in high-Schmidt-number mixing.
Jörg Schumacher (1), Herwig Zilken (2), Katepalli R. Sreenivasan (3,4).
(1) Department of Physics, Philipps University Marburg, D-35032 Marburg, Germany; (2) Visualization Laboratory, Central Institute for Applied Mathematics, Research Center Jülich, D-52425 Jülich, Germany; (3) International Centre for Theoretical Physics, I-34014 Trieste, Italy; (4) Institute for Physical Science and Technology, University of Maryland, College Park, MD 20742, USA.
Pseudospectral simulation of scalar mixing for a Schmidt number of 32 in a homogeneous isotropic turbulent flow. The left picture shows a slice through the instantaneous scalar dissipation field; the color coding runs logarithmically from 0.00001 (blue) to 100 (red) in units of the mean scalar dissipation rate. Magnifications of the black frames in the left panel are plotted on the right, with the Kolmogorov and Batchelor scales indicated in each case. The grid resolution is also shown, with N = 1024 for a total box length of L = 2π. In both magnifications, scale variations around the Batchelor scale are excited and indeed observable, because the grid resolution in the simulations is finer than the Batchelor scale.
Isosurfaces of the scalar dissipation field at a level of 11 in units of the mean scalar dissipation rate. The isosurfaces are colored with respect to a flow property that is a measure of local vorticity [1,2]; this information is deduced from an eigenvalue analysis of the velocity gradient tensor at each grid point [1]. Green represents pure straining motion and red corresponds to vorticity-dominated motion. The picture illustrates that both flow topologies contribute to the steepening of intense dissipative fronts. The gray-shaded cutting plane is also shown.
Support by the Deutsche Forschungsgemeinschaft (DFG) and the US National Science Foundation (NSF) is gratefully acknowledged. Computations were done on the IBM-JUMP cluster at the John von Neumann Institute for Computing.
References: [1] J. Schumacher and K. R. Sreenivasan, Phys. Rev. Lett. 91, 174501 (2003). [2] J. Zhou, R. J. Adrian, S. Balachandar and T. M. Kendall, J. Fluid Mech. 387, 353 (1999).

15. [Figures: low scalar dissipation; panel labels: "measurable differences", "another view of the same thing", "no conspicuous difference"]

16. Theory for η_smallest: Yakhot, Phys. Rev. E 63, 026307 (2001); Kurien & Sreenivasan, Phys. Rev. E 64, 056302 (2001); Yakhot & Sreenivasan, J. Stat. Phys. 121, 823 (2005); Schumacher, Sreenivasan & Yakhot, New J. Phys. 9, 89 (2007).
• Derive exact dynamical equations for structure functions of all orders.
• Model the pressure terms (or use the point-splitting technique), and determine the inertial-range scaling analytically.
• Match this inertial scaling with the smooth (analytic) behavior at very small scales.
• Pick the scale corresponding to moments of infinite order.
Result: η_smallest/L = Re^(-1) (instead of Re^(-3/4), as for the standard Kolmogorov scale), and N = Re^3 (instead of the standard Re^(9/4) relation).
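
The two grid-count estimates follow directly from the corresponding scale ratios (a short step added here, not spelled out on the slide):

\[
  \frac{L}{\eta_K} \sim Re^{3/4} \;\Rightarrow\; N \sim \left(\frac{L}{\eta_K}\right)^{3} \sim Re^{9/4},
  \qquad
  \frac{L}{\eta_{\mathrm{smallest}}} \sim Re \;\Rightarrow\; N \sim \left(\frac{L}{\eta_{\mathrm{smallest}}}\right)^{3} \sim Re^{3}.
\]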

17. A practical consequence: R_λ ≈ 0.5 W^(1/8), R_λ ≈ 4.5 N^(1/2) (instead of R_λ ≈ 0.5 W^(1/6), R_λ ≈ 4.5 N^(2/3)). Or, the computational work W for a given Reynolds number ~ Re^4 (instead of the traditional Re^3 estimate from Landau & Lifshitz); present/traditional = O(Re). For R_λ = 10^3, Re ≈ 10^5, the ratio = O(10^5). A 4096^3 box can resolve all scales only up to R_λ ≈ 300 (not 1200, as previously thought). A 32,768^3 box can resolve all scales only up to R_λ ≈ 1000 (not 4,000, as we might have projected).
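
A minimal Python sketch (added here, not part of the talk) that reproduces these numbers from the two resolution relations; the prefactor 4.5 and the exponents are taken from slides 6 and 17.

    # Maximum R_lambda resolvable on an N^3 grid under the traditional estimate
    # (R_lambda ~ 4.5 N^(2/3), resolving the mean Kolmogorov scale) and the
    # revised one (R_lambda ~ 4.5 N^(1/2), resolving the smallest local
    # dissipation scale). Prefactors and exponents are the slide values.

    def r_lambda_traditional(N):
        """Traditional estimate based on the mean Kolmogorov scale."""
        return 4.5 * N ** (2.0 / 3.0)

    def r_lambda_revised(N):
        """Revised estimate based on the smallest local dissipation scale."""
        return 4.5 * N ** 0.5

    for N in (4096, 32768):
        print(f"N = {N:6d}: traditional R_lambda ~ {r_lambda_traditional(N):5.0f}, "
              f"revised R_lambda ~ {r_lambda_revised(N):5.0f}")

    # Output:
    #   N = 4096 : traditional ~ 1152, revised ~ 288  (slide: 1200 vs ~300)
    #   N = 32768: traditional ~ 4608, revised ~ 815  (slide: ~4000 vs ~1000)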

18. Previous work.
Mathematical: Constantin, Foias, Manley & Temam, J. Fluid Mech. 150, 427 (1985). The authors show that the number of degrees of freedom N of a 3-D turbulent flow obeys N ~ (L/η)^3 ~ Re^3, and argue that the conventional estimate of Re^(9/4) (e.g., Landau & Lifshitz 1959), based on the Kolmogorov scale determined by the average dissipation rate, is optimistic. If the wavenumber spectrum varies as a power law, bounded on both sides, with roll-off rate n, they show that (L/η)^3 ~ Re^(6/(n+1)), giving Re^(9/4) for n = 5/3.
Phenomenological:
(a) From the measurements of Meneveau & Sreenivasan (1988+): Re^(4-)
(b) From Paladin & Vulpiani (1988): Re^(3-b), b > 0
(c) From the She-Leveque model (1994): Re^(3.6)
(d)
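
A quick arithmetic check of the quoted exponent (added here):

\[
  n = \tfrac{5}{3} \;\Rightarrow\; \frac{6}{n+1} = \frac{6}{8/3} = \frac{9}{4},
\]

recovering the conventional Re^(9/4) estimate.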

19. [Derivation slide; the displayed equations did not survive transcription. With r = Δx = the chosen resolution, the argument leads to the Reynolds-number scaling of the dissipation moments, ⟨ε^n⟩ ~ Re^(d_n), compared on the next slide.]

20. ⟨ε^n⟩ ~ Re^(d_n)

          theory    DNS              RSH
    d_1   0         0                0
    d_2   0.157     0.152            0.173
    d_3   0.489     0.476 ± 0.009    0.465
    d_4   0.944     0.978 ± 0.034    0.844

The DNS results are in the direction of the theory, but one needs to go to higher moments to be certain. More detailed comparisons are in Schumacher, KRS and Yakhot, New J. Phys. 9, 89 (2007).

21. Reynolds number barrier? Improvements are happening with respect to:
• size of transistors (now ~20 nm)
• speed of communication (now ~10^-4 c)
• density of information
• Watts/CPU, etc.
• petascale → exascale → zettascale
• algorithms are improving, and PK may own one zettascale machine for full-time simulations for about 10 years (instead of for a few months, as now), but…
Limit of computability: R_λ = 10,000; only new paradigms (e.g., biomolecular transistors, quantum computing) can push past this barrier.
R_λ = 10-100 has come easily; 100-1000 has come with some difficulty; 1000-10,000 will come with extreme difficulty.

22. As said yesterday: Steve Orszag, a pioneer in DNS of turbulence, saw the turbulence problem mostly as one of computability. With some luck, we will "soon" know everything worth knowing about box turbulence within a decade, and declare the problem solved. Unfortunately, it will have to wait for some more time.
