
Numerical Methods for Partial Differential Equations





Presentation Transcript


  1. Numerical Methods for Partial Differential Equations CAAM 452 Spring 2005 Lecture 7 Instructor: Tim Warburton

  2. Summary of Small Theta Analysis • The dominant remainder term in this analysis relates to a commonly used, physically motivated description of the shortfall of the method: Dissipative / Unstable / Dispersive / Dispersive (labels from the slide's summary figure).

  3. Dissipation in Action • Consider the right difference version: • We are going to experimentally determine how much dissipation the solution experiences.
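As a concrete illustration, here is a minimal method-of-lines sketch of this experiment. It assumes the advection equation is written as u_t = c u_x with c > 0, the convention for which the right (one-sided) difference is the dissipative choice described here; the grid size, wave speed, time step, and Gaussian pulse are illustrative stand-ins rather than the lecture's actual parameters, and classical RK4 stands in for the JST integrator.

```python
import numpy as np

def right_diff(u, dx):
    """Right one-sided difference D+ u_j = (u[j+1] - u[j]) / dx on a periodic grid."""
    return (np.roll(u, -1) - u) / dx

def rk4_step(u, dt, rhs):
    """One classical 4th-order Runge-Kutta step for du/dt = rhs(u)."""
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Illustrative parameters (not the lecture's): u_t = c u_x on [0, 1), periodic.
c, M, T = 1.0, 100, 4.0                    # wave speed, grid points, 4 periods
dx = 1.0 / M
dt = 0.1 * dx                              # small dt: time-stepping error is secondary
x = np.arange(M) * dx
u = np.exp(-100.0 * (x - 0.5) ** 2)        # smooth pulse (stand-in initial condition)

for _ in range(int(round(T / dt))):
    u = rk4_step(u, dt, lambda v: c * right_diff(v, dx))
# After 4 periods the pulse sits in the right place but with visibly reduced amplitude.
```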

  4. Testing Methodology • Reduce the time-stepping error to a secondary effect by choosing a 4th-order Runge-Kutta (JST) time integrator and a small dt. • Fix the method (choose one of the difference operators for the spatial derivative). • Do a parameter study in M, i.e. ask: how does increasing the number of data points change the error? • We need to understand what questions we are asking: • Is the computer code stable (as predicted by the theory)? • Does the computer code converge (as predicted by the theory)? • What order of accuracy in M do we achieve? • We hypothesize that if the theory holds then we should achieve: • What is the actual approximate p achieved?

  5. Initial Condition • Because we do not wish to introduce uncertainty over the source of errors in the computation we use an initial condition which is infinitely smooth.

  6. Computing Approximate Order of Accuracy • We compute the error by: • We will compute the approximate order of accuracy by assuming: • where C is independent of M • If we measure the error for two different M then we can eliminate the constant: • In the case when M2=2M1
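A small helper along these lines (the function name and the example error values are hypothetical, chosen only to illustrate the formula):

```python
import numpy as np

def observed_order(err_coarse, err_fine, ratio=2.0):
    """Estimate p from error(M) ~ C * M**(-p), measured at M1 and M2 = ratio * M1."""
    return np.log(err_coarse / err_fine) / np.log(ratio)

# Hypothetical errors from runs at M = 40 and M = 80 with a 2nd-order scheme:
print(observed_order(4.0e-3, 1.0e-3))   # ~2.0
```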

  7. After 4 Periods • The numerical pulses are in the right place but have severely diminished amplitude.

  8. After 4 Periods • After 4 periods the solution is totally flattened in all but the last 2 results. If we bothered to keep increasing M we would eventually see the error decline as 1/M.

  9. The Unstable Left Stencil • I repeated this with everything the same, except using the left difference stencil instead of the right difference. • We clearly see that there is initial growth near the pulses, but eventually the dominant feature is the highly oscillatory and explosive growth (large m in the red term above).

  10. Cont (snapshot)

  11. Dispersion in Action • Consider the central difference version: • With the same time integrator as before, M=100, … • We note that the remainder terms are dispersive corrections – i.e. they indicate that modes of different frequency will travel with different speeds. • Furthermore, to leading order, the higher-frequency modes will travel more and more slowly as m increases.

  12. 2nd Order Central Difference After 4 Periods • We notice that the humps start to shed rear oscillations as the higher-frequency Fourier components lag behind the lower-frequency Fourier components.

  13. Convergence Study (t=8) • What should we use as an error indicator?

  14. 2nd Order Central Difference • We do not see convincing 2nd-order accuracy • I computed this by log2(error_M / error_{M-1})

  15. 4th Order Central Differencing

  16. 4th Order Central Difference • We see pretty convincing 4th-order accuracy • I computed this by log2(error_M / error_{M-1})
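For reference, a sketch of the standard 4th-order central difference stencil on a periodic grid (the helper name and the use of numpy.roll are mine):

```python
import numpy as np

def central_diff_4(u, dx):
    """4th-order central difference, periodic in j:
    du/dx ~ (u[j-2] - 8*u[j-1] + 8*u[j+1] - u[j+2]) / (12*dx)."""
    return (np.roll(u, 2) - 8.0 * np.roll(u, 1)
            + 8.0 * np.roll(u, -1) - np.roll(u, -2)) / (12.0 * dx)
```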

  17. 6th Order Central Difference • We see pretty convincing 6th-order accuracy • I computed this by log2(error_M / error_{M-1})

  18. Summary of Testing Procedure • Understand what you want to test • Keep as many parameters fixed as possible • If possible, perform an analysis beforehand • Run parameter sweeps to determine whether the code agrees with the analysis • Estimate the error scaling with a single parameter if possible • It should be apparent here that the errors computed in the simulations only approach the tidy power law (dx^p) in the limit of small dx (large M). • It is quite common to refer to the range of M before the error scaling applies as the “preasymptotic convergence range”. • The preasymptotic range is due to the other factors in the error estimate being much larger than the dx^p term (e.g. the p’th derivatives may be very large, or the constant may be large…). • In this case we heuristically set dt small, using a high-order Runge-Kutta time integrator, and then performed a parameter sweep on M. We could have been unlucky: the time-stepping errors might have been dominant, which would mask the dx behavior. • To be careful we would repeat this experiment with an even smaller dt and check whether the scalings still hold.

  19. Summary of our Approach to Designing Finite Difference Methods • We have systematically created finite difference methods by separating the treatment of space and time derivatives. • Designing a solver then consists of choosing/testing: • a time integrator (so far we have covered Euler-Forward, Leap Frog, Adams-Bashforth (2, 3, 4), Runge-Kutta) • a discretization for the spatial derivatives • a discrete differential operator which has all eigenvalues in the left half of the complex plane (assuming the PDE only admits non-growing solutions) • possibly using Gerschgorin’s theorem to localize the eigenvalues of the discrete differentiation operator (sketched below) • dt so that dt times the largest eigenvalue (by magnitude) of the derivative operator sits inside the stability region → stability • (small dx) spatial truncation analysis → consistency and accuracy • Fourier analysis for classification of the differential operator • Writing code and testing.
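As one illustration of the eigenvalue-localization step, here is a sketch that bounds the spectrum of a periodic 2nd-order central-difference matrix with Gerschgorin discs and compares the bound against the computed eigenvalues; the matrix size is an arbitrary choice for the example.

```python
import numpy as np

M = 50
dx = 1.0 / M
# Periodic 2nd-order central difference matrix: (D u)[j] = (u[j+1] - u[j-1]) / (2*dx)
D = np.zeros((M, M))
for j in range(M):
    D[j, (j + 1) % M] = 1.0 / (2.0 * dx)
    D[j, (j - 1) % M] = -1.0 / (2.0 * dx)

# Gerschgorin: every eigenvalue lies in a disc centered at D[j, j] with radius
# sum_{k != j} |D[j, k]|; here all discs are centered at 0 with radius 1/dx.
radius = np.max(np.sum(np.abs(D), axis=1) - np.abs(np.diag(D)))
eigs = np.linalg.eigvals(D)
print("Gerschgorin bound on |lambda|:", radius)               # = 1/dx
print("largest computed |lambda|:    ", np.max(np.abs(eigs))) # <= 1/dx, purely imaginary
```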

  20. Some Classical Finite Difference Methods • However, there are a number of classical methods which we have not discussed and which do not quite fit into this framework. • We include these for completeness.

  21. Leap Frog (space and time) • We use centered differencing for both space and time. • We know that the leap frog time stepping method is only stable for operators with eigenvalues in the range: • However, we also know that the centered difference derivative matrix is a skew symmetric matrix with eigenvalues: • So we are left with a condition: i.e.
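A minimal sketch of one leapfrog (space and time) update, assuming the u_t = c u_x convention used in the earlier sketches, with nu = c*dt/dx satisfying the condition above; `u_prev` and `u_curr` are the stored levels n-1 and n.

```python
import numpy as np

def leapfrog_step(u_prev, u_curr, nu):
    """Leapfrog in space and time for u_t = c u_x (nu = c*dt/dx, |nu| <= 1):
    u[n+1][j] = u[n-1][j] + nu * (u[n][j+1] - u[n][j-1]) on a periodic grid."""
    return u_prev + nu * (np.roll(u_curr, -1) - np.roll(u_curr, 1))
```

Note that a separate start-up step (e.g. a single Runge-Kutta or Euler-Forward step) is needed to generate the second time level before the leapfrog recursion can begin.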

  22. cont • We can perform a full truncation analysis (in space and time): • We know that dt <= dx so

  23. Lax-Friedrichs Method • We immediately conclude that the following method is not stable: • This is because the Euler-Forward time integrator admits only one stable point (the origin) on the imaginary axis, while the central differencing matrix has all of its eigenvalues on the imaginary axis. • However, we can stabilize this formula by replacing the second term in the time-stepping formula:

  24. cont • This formula does not quite fit into our constructive process (method of lines approach). • We have admitted spatial averaging into the discretization of the time derivative. • We can rearrange this:

  25. cont • We can immediately determine that this is a stable method as long as c*dt/dx <= 1. • Given this condition, we observe that the time-updated solution is always bounded between the values of the left and right neighbors at the previous time, because the update is an interpolation formula.
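A minimal sketch of the update written as that interpolation, again assuming u_t = c u_x with nu = c*dt/dx; when |nu| <= 1 the two weights are non-negative and sum to one, which is exactly the boundedness observation above.

```python
import numpy as np

def lax_friedrichs_step(u, nu):
    """One Lax-Friedrichs update for u_t = c u_x (nu = c*dt/dx, |nu| <= 1):
    0.5*(u[j+1] + u[j-1]) + 0.5*nu*(u[j+1] - u[j-1])
      = (1+nu)/2 * u[j+1] + (1-nu)/2 * u[j-1]   (a convex combination)."""
    up, um = np.roll(u, -1), np.roll(u, 1)       # periodic u[j+1], u[j-1]
    return 0.5 * (1.0 + nu) * up + 0.5 * (1.0 - nu) * um
```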

  26. cont • A formal stability analysis would involve: • Which gives stability for each mode if:

  27. cont • Thus, considering all the possible modes: • We note that the middle mode requires: • This condition gives a sufficient condition for all modes to be bounded. • By the invertibility and boundedness of the Fourier transform we conclude that the original scheme is stable.
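For reference, a sketch of the computation behind these two slides, writing nu = c*dt/dx and assuming the u_t = c u_x convention used in the code sketches above (the sign of the imaginary part flips with the opposite convention; the modulus, and hence the stability condition, does not change):

```latex
g(\theta) \;=\; \cos\theta \;+\; i\,\nu\,\sin\theta,
\qquad
|g(\theta)|^{2} \;=\; \cos^{2}\theta + \nu^{2}\sin^{2}\theta
             \;=\; 1-\bigl(1-\nu^{2}\bigr)\sin^{2}\theta .
% The "middle mode" \theta = \pi/2 gives |g| = |\nu|, so |\nu| \le 1 is required;
% the same bound makes |g(\theta)| \le 1 for every \theta, hence it is sufficient.
```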

  28. cont • We can recast the Lax-Friedrichs method once again: • This method consists of Euler-Forward time stepping, central differencing for the space derivative, and an extra dissipative term (i.e. a grid-dependent advection-diffusion equation).

  29. CFL Number • The ratio c*dt/dx appears repeatedly, in particular in the estimates for the maximum possible time step. • We refer to this quantity as the CFL (Courant-Friedrichs-Lewy) number. • Bounds of this form, which result from a stability analysis, are frequently referred to as CFL conditions.

  30. Lax-Wendroff Method • We are fairly free to choose the parameter in the stabilizing term: • The artificial viscosity term acts to shift the eigenvalues of the spatial operator into the left half-plane. • Recall: the Euler-Forward stability region is the unit disc centered at -1 in the complex plane. So pushing the eigenvalues off the imaginary axis allows us to choose a dt small enough that the scaled eigenvalues dt*lambda of the discrete spatial operator lie inside the stability region…
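A minimal sketch of one Lax-Wendroff update on a periodic grid, once more assuming u_t = c u_x and nu = c*dt/dx (with the opposite sign convention the transport term changes sign):

```python
import numpy as np

def lax_wendroff_step(u, nu):
    """One Lax-Wendroff update for u_t = c u_x (nu = c*dt/dx, |nu| <= 1):
    u[n+1][j] = u[j] + nu/2*(u[j+1] - u[j-1]) + nu^2/2*(u[j+1] - 2*u[j] + u[j-1])."""
    up, um = np.roll(u, -1), np.roll(u, 1)       # periodic u[j+1], u[j-1]
    return u + 0.5 * nu * (up - um) + 0.5 * nu**2 * (up - 2.0 * u + um)
```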

  31. Class Exercise • Please provide a CFL condition for: • In terms of: • What is the truncation error?

  32. Von Neumann Analysis • By stealth I have introduced the classical Von Neumann analysis. • The first step is to Fourier transform the finite difference equation in space. • A shortcut is to make the following substitutions:

  33. Analysis cont • We can also make the substitutions for the difference operators:
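A sketch of the standard substitutions (the notation here is mine; theta_m denotes the grid wavenumber angle of mode m, as in the earlier Fourier analysis):

```latex
u^{n}_{j} \;\to\; \hat{u}^{n}\, e^{\,i j \theta_m},
\qquad
u^{n}_{j\pm 1} \;\to\; e^{\pm i\theta_m}\,\hat{u}^{n}\, e^{\,i j \theta_m},
\qquad
u^{n+1}_{j} \;\to\; \hat{u}^{n+1}\, e^{\,i j \theta_m},
\\[1ex]
\frac{u^{n}_{j+1}-u^{n}_{j-1}}{2\Delta x}
  \;\to\; \frac{i\sin\theta_m}{\Delta x}\,\hat{u}^{n}\, e^{\,i j \theta_m},
\qquad
\frac{u^{n}_{j+1}-2u^{n}_{j}+u^{n}_{j-1}}{\Delta x^{2}}
  \;\to\; \frac{2(\cos\theta_m - 1)}{\Delta x^{2}}\,\hat{u}^{n}\, e^{\,i j \theta_m}.
```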

  34. Example • The Lax-Wendroff scheme • becomes

  35. Example cont • So the Lax-Wendroff scheme becomes a neat one-step method for each Fourier coefficient: • We have neatly decoupled the Fourier coefficients, so we are back to solving a recurrence relation in n for each m: • Hence we need to examine the roots of the stability polynomial for the above.

  36. Example cont • This case is trivial, as we just need to bound the modulus of the single root by 1. • Plot this as a function of theta…
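A quick sketch of that plot for the Lax-Wendroff amplification factor, with CFL numbers chosen purely for illustration; |g| is the same under either sign convention for the advection term.

```python
import numpy as np
import matplotlib.pyplot as plt

theta = np.linspace(-np.pi, np.pi, 400)
for nu in (0.5, 0.9, 1.0, 1.1):                       # illustrative CFL numbers
    g = 1.0 - nu**2 * (1.0 - np.cos(theta)) + 1j * nu * np.sin(theta)
    plt.plot(theta, np.abs(g), label=f"nu = {nu}")
plt.axhline(1.0, color="k", linewidth=0.5)            # stability threshold |g| = 1
plt.xlabel("theta")
plt.ylabel("|g(theta)|")
plt.legend()
plt.show()
# |g| stays at or below 1 for nu <= 1 and exceeds 1 for nu = 1.1.
```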

  37. Final Von Neumann Shortcut • We can skip lots of steps by making the direct substitution: • Here g is referred to as the amplification factor and i*m*theta is our previous theta_m

  38. Next Time • Definition of consistency • Lax-Richtmyer equivalence theory • Treating higher order derivatives

  39. Homework 3 Q1) Build a finite-difference solver for Q1a) use your Cash-Karp Runge-Kutta time integrator from HW2 for time stepping Q1b) use the 4th-order central difference in space (periodic domain) Q1c) perform a stability analysis for the time stepping (based on visual inspection of where the Cash-Karp Runge-Kutta stability region covers the imaginary axis) Q1d) bound the spectral radius of the spatial operator Q1e) choose a dt well inside the stability region Q1f) perform four runs with the initial condition (use M=20,40,80,160) and compute the maximum error at t=8 Q1g) estimate the order of accuracy of the solution. Q1h) extra credit: perform adaptive time-stepping to keep the local truncation error from the time stepping bounded by a tolerance.
