
An Upgraded Algorithm for Shell Model Calculations and its Implementation in Medium-Heavy Nuclei






Presentation Transcript


  1. An Upgraded Algorithm for Shell Model Calculations and its Implementation in Medium-Heavy Nuclei. D. Bianco (Naples), F. Andreozzi, F. Knapp, N. Lo Iudice, A. Porrino. Vietri10

  2. Currently adopted methods
  • Direct diagonalization: Lanczos (ANTOINE); see E. Caurier et al., Rev. Mod. Phys. 77, 427 (2005) for a short review.
  • Stochastic methods: Monte Carlo (C. W. Johnson et al., PRL 92), suitable for the ground state; suffers from the minus-sign problem.
  • In between: Quantum MC (M. Honma et al., PRL 95), where MC is used to select the relevant basis states. Problems: redundancy, and symmetries broken by the stochastic procedure.
  • Truncation method: Density Matrix Renormalization Group (DMRG); J. Dukelsky and S. Pittel, Rep. Prog. Phys. 67, 513 (2004).

  3. Iterative diagonalization algorithm (F. Andreozzi, A. Porrino, and N. Lo Iudice, J. Phys. A, 2002)
  • Let I = Σ_i |i><i| and A (= A†) ≡ {a_ij} = {<i|Â|j>}, i.e.

      A = | a_11 a_12 a_13 a_14 ... a_1N |
          | a_21 a_22 a_23 a_24 ... a_2N |
          | a_31 a_32 a_33 a_34 ... a_3N |
          | a_41 a_42 a_43 a_44 ... a_4N |
          | ...                          |
          | a_N1 ...                a_NN |

  • Goal: determine the lowest eigenvalue and eigenvector of A.

  4. First iteration loop
  • Diagonalize the leading 2x2 block | a_11 a_12 ; a_21 a_22 | to get (λ_2, |φ_2>).
  • Compute b_23 = <φ_2|Â|3> and diagonalize | λ_2 b_23 ; b_23 a_33 | to get (λ_3, |φ_3>).
  • Compute b_34 = <φ_3|Â|4> and diagonalize | λ_3 b_34 ; b_34 a_44 | to get (λ_4, |φ_4>).
  • ............
  • Compute b_{N-1,N} = <φ_{N-1}|Â|N> and diagonalize | λ_{N-1} b_{N-1,N} ; b_{N-1,N} a_NN | to get (λ_N, |φ_N>).
  • λ_N ≈ E^(1), |φ_N> = |ψ^(1)> = Σ_{i=1,N} c_i^(N) |i>.
  End of the first iteration loop (a sketch follows).
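
A minimal numpy sketch of one such sweep for a real symmetric matrix A is given below. It only illustrates the two-by-two update just described; it is not the authors' code, and the name first_sweep is my own.

```python
import numpy as np

def first_sweep(A):
    """One sweep of the iterative two-level diagonalization: grow through the
    basis, at each step diagonalizing a 2x2 matrix built from the current
    approximate eigenpair and the next basis state."""
    N = A.shape[0]
    lam, vecs = np.linalg.eigh(A[:2, :2])      # leading 2x2 block
    lam = lam[0]                               # lambda_2
    phi = np.pad(vecs[:, 0], (0, N - 2))       # |phi_2> embedded in the full space
    for j in range(2, N):
        b = phi @ A[:, j]                      # b = <phi|A|j>
        mu, w = np.linalg.eigh(np.array([[lam, b], [b, A[j, j]]]))
        lam = mu[0]                            # new lowest root
        e_j = np.zeros(N); e_j[j] = 1.0
        phi = w[0, 0] * phi + w[1, 0] * e_j    # new approximate eigenvector
        phi /= np.linalg.norm(phi)
    return lam, phi                            # E^(1), |psi^(1)>
```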

  5. Second iteration loop
  • Define λ_1 = E^(1), |φ_1> = |ψ^(1)>.
  • Compute b_1 = <φ_1|Â|1> and proceed as in the first loop.
  • ......
  The i-th iteration loop yields E^(1), ψ^(1); E^(2), ψ^(2); ...; E^(i), ψ^(i); ...
  THEOREM: If the sequence E^(i) converges, then E^(i) → E and ψ^(i) → ψ. A sketch of the full iteration follows.
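
Building on first_sweep from the previous sketch, a possible outer loop that restarts the sweep from the current approximation until the lowest eigenvalue is stationary could look as follows. The explicit orthogonalization of each basis state against the current vector is my own simplification of the basis redefinition used in the algorithm.

```python
import numpy as np

def iterative_eigen(A, tol=1e-10, max_loops=100):
    """Repeat the sweep, each time restarting from the current approximate
    eigenpair, until the lowest eigenvalue stops changing (the convergence
    theorem quoted on the slide)."""
    N = A.shape[0]
    lam, phi = first_sweep(A)                  # from the sketch above
    for _ in range(max_loops):
        lam_old = lam
        for j in range(N):
            e_j = np.zeros(N); e_j[j] = 1.0
            q = e_j - (phi @ e_j) * phi        # |j> orthogonalized to |phi>
            norm_q = np.linalg.norm(q)
            if norm_q < 1e-12:
                continue
            q /= norm_q
            b = phi @ (A @ q)                  # coupling <phi|A|q>
            mu, w = np.linalg.eigh(np.array([[lam, b], [b, q @ (A @ q)]]))
            lam = mu[0]
            phi = w[0, 0] * phi + w[1, 0] * q
            phi /= np.linalg.norm(phi)
        if abs(lam - lam_old) < tol:           # E^(i) has converged
            break
    return lam, phi
```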

  6. From 1 to v eigensolutions
  • The algorithm has a variational foundation.
  • Because of its matrix formulation, it can easily be generalized so as to generate at once an arbitrary number v of eigensolutions.
  • The structure of the algorithm remains unchanged: we just need to make the correspondence

      | λ_{j-1}  b_j  |        | Λ_h    B_h |
      | b_j      a_jj |  <-->  | B_h^T  Ã   |

  7. Implementation
  Start with a v×v matrix A_v; after diagonalization (A_v → Λ_v) and redefinition of the basis (|i> → |φ_i>), compute the couplings b_ij and embed Λ_v into a larger matrix A_{v+r}:

      A_{v+r} = | λ_1  0    ...  0    | b_11 ... b_1j |
                | 0    λ_2  ...  0    | b_21 ... b_2j |
                | ...                 | ...           |
                | 0    0    ...  λ_v  | b_v1 ... b_vj |
                | b_11 ...       b_v1 | a_11 ... a_1j |
                | b_12 ...       b_v2 | a_21 ... a_2j |
                | ...                 | ...           |
                | b_1j ...       b_vj | a_j1 ... a_jj |

  • Diagonalize A_{v+r} and extract the new lowest v roots.
  • Redefine the basis (|φ_i> → |ψ_i>) and iterate as for the 2×2 matrix (see the sketch below).
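
A sketch of this block version, again in plain numpy and with hypothetical names (v approximate eigenpairs carried at once, the next r basis states appended at each step), might read:

```python
import numpy as np

def block_sweep(A, v, r):
    """One sweep of the block generalization: embed the current v eigenvalues
    Lambda_v, together with their couplings to the next r basis states, into a
    (v+r)x(v+r) matrix, keep its lowest v roots, and redefine the v vectors."""
    N = A.shape[0]
    lam, vecs = np.linalg.eigh(A[:v, :v])      # A_v -> Lambda_v
    Phi = np.zeros((N, v))
    Phi[:v, :] = vecs                          # current v approximate eigenvectors
    j = v
    while j < N:
        k = min(r, N - j)                      # next block of basis states
        B = Phi.T @ A[:, j:j + k]              # couplings b_ij = <phi_i|A|j>
        small = np.block([[np.diag(lam), B],
                          [B.T, A[j:j + k, j:j + k]]])
        mu, W = np.linalg.eigh(small)
        lam = mu[:v]                           # keep the lowest v roots
        E = np.zeros((N, k)); E[j:j + k, :] = np.eye(k)
        Phi = np.hstack([Phi, E]) @ W[:, :v]   # redefine the v vectors
        j += k
    return lam, Phi
```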

  8. Features of the algorithm
  • Easy implementation
  • Variational foundation
  • Robust
  • Convergence to the extremal eigenvalues
  • Numerically stable and ghost-free solutions
  • Orthogonality of the computed eigenvectors
  • Fast: O(N²) operations
  • But not enough!

  9. M-scheme
  • Virtue: the calculation of H is very easy.
  • Shortcoming: the basis dimensions become huge.
  • Remedy: H is sparse in the M scheme.

  10. Procedure
  • Sort the partitions {j_1^{n_1} ... j_m^{n_m}} according to increasing energies.
  • Choose a truncated space D of dimension d: I_d = Σ_d |i><i| = Σ_d |j_1^{n_1} ... j_m^{n_m}><j_1^{n_1} ... j_m^{n_m}|.
  • Property: J_k |j_1^{n_1} ... j_m^{n_m}> ∈ D.
  • New Hamiltonian: H_J = H + c [J² − M(M+1)] (see the sketch below).
  • Generate (by the algorithm) from {(H_J)_ij} the eigensolutions {λ_k}, {ψ_k(J)}, k = 1, ..., v (v < d).
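
As an illustration of the H_J construction only: the value of the constant c and the sparse format are assumptions of mine, and J2 is taken to be the matrix of J² in the same M-scheme basis.

```python
import scipy.sparse as sp

def shifted_hamiltonian(H, J2, M, c=10.0):
    """Sketch of the J-projection trick quoted on the slide: adding
    c*(J^2 - M(M+1)) to the M-scheme Hamiltonian pushes states with J > M
    upwards, so that the lowest roots have J = M.  H and J2 are sparse
    matrices in the same M-scheme basis; c is a positive constant whose
    value here is only illustrative."""
    n = H.shape[0]
    return H + c * (J2 - M * (M + 1) * sp.identity(n, format="csr"))
```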

  11. SM space decomposition: I = M_0 + M_1 + ... + M_N. At each step the algorithm acts in the space enlarged by the next block M_i and yields a set Λ_i ≡ {λ_1 .. λ_i .. λ_v}_i, which replaces the previous set Λ_{i-1}; the final set Λ_N coincides with the exact eigenvalues. Analogy: real-space (Wilson) and density-matrix (White) renormalization group.

  12. Sampling procedure
  I. Identify all configurations |j>, j = d+1, ..., r (r < N), that couple to the current {ψ_k(J)}:
     |<ψ_k(J)|H|j>|² / (a_jj − λ_k) > ε
  II. Confine ourselves to the space I_S = I_d ⊕ I_r.
  III. Apply in I_S the iterative algorithm and generate the new v eigensolutions {E_k(J)^(1), ψ_k(J)^(1)}, k = 1, ..., v.
  Repeat steps I to III for the new set {ψ_k(J)^(1)}. After several iteration loops we generate the sequence {E_k(J)^(1), ψ_k(J)^(1)}, {E_k(J)^(2), ψ_k(J)^(2)}, ..., {E_k(J)^(i), ψ_k(J)^(i)}, ...
  Convergence: {E_k(J)^(i), ψ_k(J)^(i)} → {E_k(J), ψ_k(J)}. A sketch of the selection step follows.
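
A sketch of selection step I, with hypothetical names and a dense Hamiltonian for simplicity (in a real calculation H is sparse and only its non-zero couplings are scanned):

```python
import numpy as np

def select_configurations(H, psi, lam, eps, candidates):
    """Keep a candidate configuration |j> when its second-order coupling to at
    least one of the v current eigenvectors exceeds the threshold eps.
    H is the Hamiltonian matrix, psi an (N, v) array whose columns are the
    current eigenvectors, lam the corresponding v eigenvalues."""
    kept = []
    for j in candidates:
        a_jj = H[j, j]
        # |<psi_k|H|j>|^2 / (a_jj - lambda_k) for each of the v states;
        # a_jj is assumed to lie above the targeted eigenvalues
        weights = (psi.T @ H[:, j]) ** 2 / (a_jj - lam)
        if np.any(weights > eps):
            kept.append(j)
    return kept
```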

  13. Eigenvalues: Convergence

  14. Eigenvalues: Convergence

  15. B(E2): Convergence

  16. B(E2): Convergence

  17. Spectra

  18. Spectra: theory vs. experiment

  19. THANK YOU

  20. Numerical applications: 48Cr
  • Model space: (pf) shells for both neutrons and protons
  • Hamiltonian: H = H_Nils + G_KB3
  • Space dimensions: N = 1.9 × 10^6

  21. 48Cr

  22. B(M1): Convergence

  23. 48Cr: Comparative analysis

  24. 48Cr: IS and convergence

  25. Sn isotopes
  • Model space: 1h11/2, 3s1/2, 2d3/2, 1g7/2, 2d5/2
  • Hamiltonian: H = H_Nils + G_CDBonn
  • Dimensions (116Sn): N = 1.6 × 10^7

  26. 110Sn

  27. 116Sn: Convergence and Extrapolation

  28. 116Sn: B(E2)

  29. Extrapolation
  Saturation is reached at:
  • 48Cr: n ≈ 8 × 10^5, i.e. a ratio x = n/N ≈ 0.4
  • 116Sn: n ≈ 6 × 10^4, i.e. a ratio x = n/N ≈ 0.004 (!)
  Extrapolation law: λ(n) = a + b (N/n) exp(−c N/n), fitted as sketched below.
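
A sketch of how such an extrapolation could be performed, fitting the scaling law quoted above with scipy; the function names and initial guesses are only illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def extrapolate(n_values, lambda_values, N):
    """Fit lambda(n) = a + b*(N/n)*exp(-c*N/n) to the eigenvalues obtained in
    truncated spaces of dimension n, and read off the n -> N limit a."""
    n_values = np.asarray(n_values, dtype=float)
    lambda_values = np.asarray(lambda_values, dtype=float)

    def law(n, a, b, c):
        x = N / n
        return a + b * x * np.exp(-c * x)

    p0 = (lambda_values[-1], 1.0, 1.0)          # rough starting point
    (a, b, c), cov = curve_fit(law, n_values, lambda_values, p0=p0)
    return a, np.sqrt(np.diag(cov))[0]          # extrapolated eigenvalue and its uncertainty
```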

  30. Conclusions
  • The algorithm is simple, robust and has a variational foundation.
  • Once endowed with the importance sampling,
    a) it keeps the extent of space truncation under strict control,
    b) it reaches saturation very early,
    c) it allows for extrapolation to the exact eigensolutions.
  • It is very promising for heavy nuclei.

  31. THANK YOU

  32. Sampling and truncation

  33. Energy distribution of basis states

  34. Convergence
  • The eigenvalues rapidly reach a plateau.
  • The convergence is even much faster in 133Xe.
  • The convergence is generally smooth.
  • Notice, however, the sharp turn in 48Cr: it corresponds to an energy crossing, proving the stability of the solutions.

  35. Convergence of wave functions
  • The overlaps with the exact eigenvectors soon reach unity, even when the starting value is small.
  • Notice the fluctuations at small n: they correspond to the energy crossing pointed out already as an indication of the stability of the solutions.

  36. B(E2)
  • A very stringent test, since the states are non-collective and the E2 operator M(E2) may pick up components with very small amplitudes.
  • The very fast convergence is to be noticed, indicating the reliability and efficiency of the sampling.

  37. Scaling laws (for an alternative scaling see Horoi et al., PRL 82 (1999))
  • Scaling with n (the number of sampled states):
    λ(n) = a + b (N/n) exp(−c N/n)
    ε(n) = (d/n²) exp(−c N/n)
  • These allow a high-precision extrapolation n → N.

  38. Heuristic argument
  • Consistency: ε(n) ≈ dλ/dn.
  • From the sampling condition:
    Δλ = Σ_j Δλ_j = Σ_{i=1,v} |λ_i' − λ_i| = Σ_j b_j² / (a_jj − λ − Δλ_j);
    by expanding in Δλ_j, we obtain in lowest order
    Δλ = Σ_j Δλ_j = Σ_j b_j² / (a_jj − λ).

  39. In the convergence region
  • a_jj − λ ≈ a_nn − λ ∝ n
  • b_j² = <ψ_{j-1}|H|j>² = (Σ_i c_i <i|H|j>)²   (i << j)
  • H_ij is small and random for j < n and = 0 for j > n, hence b_j² ∝ exp(−j/n).

  40. Putting these together:
  Δλ = Σ_j Δλ_j = Σ_j b_j² / (a_jj − λ), with a_jj − λ ∝ n and b_j² ∝ exp(−j/n), so that
  • Δλ ≈ b (N/n) exp(−c N/n)
  • ε(n) ≈ dλ/dn ≈ (d/n²) exp(−c N/n)

  41. Conclusions
  • The algorithm is simple, robust and has a variational foundation.
  • Once endowed with the importance sampling,
    a) it keeps the extent of space truncation under strict control,
    b) it allows for extrapolation to the exact eigensolutions.
  • It is very promising for heavy nuclei.

  42. THANK YOU

  43. • The solution of the eigenvalue problem requires the diagonalization of the many-body Hamiltonian H in a space of very large dimensions.
  • Standard diagonalization methods are very lengthy: the CPU time goes like N³, where N is the dimension of the H matrix (Wilkinson, The Algebraic Eigenvalue Problem, Clarendon Press, Oxford, 1965).
  • In most cases, only a few (very often one) eigenstates of a given J and T are needed.
  • The number of non-zero matrix elements of H grows only linearly with N.
  • Hence adaptive diagonalization algorithms, which efficiently identify the relevant pieces of H, are more suitable.

  44. Relation to the Renormalization Group (RG)
  • RG methods were (and are) adopted in solid state physics to study strongly correlated systems (where mean field breaks down).
  • One adopts simplified model Hamiltonians incorporating the core ingredients needed to describe specific phenomena:
    Hubbard model: H = −Σ_{ij} t_ij c†_i c_j + U Σ_i n_i↑ n_i↓
    Heisenberg model: H = Σ_{ij} J_ij S_i · S_j
  • In many cases only exact diagonalization or MC are reliable.
  • RG aims at solving the exact problem in the most efficient way (a toy example follows).
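
For orientation, a toy example of the brute-force route these RG methods are meant to avoid: building the spin-1/2 Heisenberg chain in the full 2^L-dimensional space and diagonalizing it exactly. This is my own illustration, not part of the talk.

```python
import numpy as np

def heisenberg_chain(L, J=1.0):
    """Build H = J * sum_i S_i . S_{i+1} for an open spin-1/2 chain of L sites
    in the full 2**L-dimensional space and return a few lowest levels."""
    sx = 0.5 * np.array([[0, 1], [1, 0]])
    sy = 0.5 * np.array([[0, -1j], [1j, 0]])
    sz = 0.5 * np.array([[1, 0], [0, -1]])

    def site_op(op, i):
        # operator acting as `op` on site i and as the identity elsewhere
        ops = [np.eye(2)] * L
        ops[i] = op
        out = ops[0]
        for o in ops[1:]:
            out = np.kron(out, o)
        return out

    H = np.zeros((2 ** L, 2 ** L), dtype=complex)
    for i in range(L - 1):
        # nearest-neighbour S_i . S_{i+1} coupling
        for s in (sx, sy, sz):
            H += J * site_op(s, i) @ site_op(s, i + 1)
    return np.linalg.eigvalsh(H)[:3]
```

Already at L = 10 this works in a 1024-dimensional space, and the dimension doubles with every added site, which is exactly why truncation schemes such as RG and DMRG are needed.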

  45. Real-space RG (K. G. Wilson, Rev. Mod. Phys. 47, 773 (1975))
  • Describe the interactions on a sublattice forming a block A of length l by a block Hamiltonian H_A in a space of dimension M.
  • Form a compound block AA of length 2l and the Hamiltonian H_AA of dimension M × M.
  • Diagonalize H_AA to find the lowest eigenstates.
  • Project H_AA onto the truncated space spanned by the lowest M eigenstates.
  • Restart the process (sketched below).
  • The method, successful for the Kondo problem, gives poor results in general.
  • Reason: boundary conditions. When solving in A, the BC impose vanishing of the wave function at the block edges, but this zero corresponds to a maximum in AA!
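
A toy illustration of the blocking procedure, written for a single-particle tight-binding chain rather than an interacting system; all names and the setting are my own simplification.

```python
import numpy as np

def wilson_rg_chain(n_iter=8, m=8, t=1.0):
    """Wilson-style real-space blocking for a tight-binding chain (hopping t):
    at each step two copies of the current block are joined through their
    facing edge sites, the compound Hamiltonian is diagonalized, and only the
    lowest m states are kept to define the new block."""
    H = -t * (np.eye(m, k=1) + np.eye(m, k=-1))   # initial block: m sites
    edge_L = np.zeros(m); edge_L[0] = 1.0          # leftmost-site vector
    edge_R = np.zeros(m); edge_R[-1] = 1.0         # rightmost-site vector
    for _ in range(n_iter):
        d = H.shape[0]
        H_AA = np.zeros((2 * d, 2 * d))            # compound block AA
        H_AA[:d, :d] = H
        H_AA[d:, d:] = H
        H_AA[:d, d:] = -t * np.outer(edge_R, edge_L)   # hopping between the copies
        H_AA[d:, :d] = H_AA[:d, d:].T
        w, V = np.linalg.eigh(H_AA)
        keep = V[:, :m]                            # lowest m eigenstates
        H = keep.T @ H_AA @ keep                   # project onto the kept states
        edge_L = keep[:d, :].T @ edge_L            # outer edges in the new basis
        edge_R = keep[d:, :].T @ edge_R
    return np.sort(np.linalg.eigvalsh(H))[:5]      # lowest approximate levels
```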

  46. Relation to the real-space RG (K. G. Wilson, Rev. Mod. Phys. 47, 773 (1975))
  Wilson's RG:
  • Describe a block A of length l by a block Hamiltonian H_A in a space of dimension M.
  • Form a compound block AA of length 2l and the Hamiltonian H_AA of dimension M × M.
  • Diagonalize H_AA to find the lowest eigenstates.
  • Project H_AA onto the truncated space spanned by the lowest M eigenstates.
  • Start again.
  • The method gives poor results because of inconsistent boundary conditions in A versus AA.
  Our algorithm:
  • The block A is the v×v matrix.
  • We embed A into a larger matrix of size (v+k)×(v+k).
  • We diagonalize the larger matrix and pick the lowest v eigenvalues.
  • We redefine a new basis in a v-dimensional space.
  • We restart the iteration with the new basis.
  • In our case the boundary conditions are the same in any space.

  47. Density-matrix RG (DMRG) (S. R. White, PRL 69, 2863 (1992))
  • Assume a block S of length l in an M_S-dimensional space and a similar block E.
  • Add a site to each block and form the new enlarged blocks S' and E'.
  • Form the superblock of length 2l + 2 from S' and E'.
  • Diagonalize H in the superblock space.
  • Diagonalize the density matrix ρ = Tr_E |Ψ><Ψ|, obtaining the eigenvalues w_α (weights).
  • Form a new reduced basis for S' out of the M_S eigenstates with the largest weights w_α (see the truncation sketch below).
  • By diagonalizing only in the superblock space, the inconsistency coming from the boundary conditions is avoided.
  • The method is quite successful in solid state physics.
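
A sketch of the density-matrix truncation step only (not a full DMRG sweep), with hypothetical names: given the superblock ground state, trace out the environment and keep the eigenvectors of ρ with the largest weights.

```python
import numpy as np

def dmrg_truncation(psi, dim_S, dim_E, m_keep):
    """Given the superblock ground state psi (a vector over the product space
    S' x E'), build the reduced density matrix of the system block and return
    the m_keep eigenvectors with the largest weights as the new reduced basis."""
    Psi = psi.reshape(dim_S, dim_E)            # (system, environment) matrix
    rho = Psi @ Psi.conj().T                   # rho = Tr_E |Psi><Psi|
    w, U = np.linalg.eigh(rho)                 # weights w_alpha and eigenvectors
    order = np.argsort(w)[::-1]                # sort by decreasing weight
    kept = U[:, order[:m_keep]]                # new reduced basis for S'
    truncation_error = 1.0 - w[order[:m_keep]].sum()
    return kept, truncation_error
```

A system-block operator O_S would then be carried to the reduced basis as kept.conj().T @ O_S @ kept.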

  48. DMRG in Nuclear Physics (J. Dukelsky and S. Pittel, Rep. Prog. Phys. 67, 513 (2004))
  • In the original version the blocks were composed of sets of particle and hole states, respectively.
  • Results: not very encouraging.
  • In the newer versions (T. Papenbrock and D. J. Dean, J. Phys. G 31, S1377 (2005); S. Pittel and N. Sandulescu, PRC 73, 014301 (2006)) the SM space is partitioned according to the philosophy adopted in our approach. The only difference is that they still use the diagonalization of the density matrix to truncate the space.
  • As pointed out, however, there are no compelling reasons for doing so.

  49. I. Iterate up to N − 1.
  II. Update the basis {|φ_{N-1}>, |N>}.
  III. Compute b_N = <φ_{N-1}|Â|N>.
  IV. Diagonalize the 2×2 matrix, i.e. solve

      Det | λ_{N-1} − λ   b_N        | = 0
          | b_N           a_NN − λ   |

  V. Pick up the lowest eigenvalue and eigenvector:
     λ_N ≈ E^(1), |φ_N> = |ψ^(1)> = Σ_{i=1,N} c_i^(N) |i>.
  End of the first iteration loop.

  50. We then start a new iteration, obtaining the sequence E^(1), ψ^(1); E^(2), ψ^(2); ...; E^(i), ψ^(i); ...
  THEOREM: If the sequence E^(i) converges, then
  E^(i) → E (eigenvalue of the matrix A),
  ψ^(i) → ψ (eigenvector of the matrix A).
