
Data Assimilation Methods



Presentation Transcript


  1. Data Assimilation Methods 7 December 2012

  2. Thematic Outline of Basic Concepts • What are the fundamental characteristics of some simple data assimilation methods? • What is three-dimensional variational data assimilation, and how does it operate? • What is diabatic initialization, and what purpose does it serve in determining the analysis state?

  3. Successive-Correction Methods • Examples: Cressman, Barnes schemes • Objective analysis techniques used to obtain a gridded analysis from point observations. • Spreads observation influence isotropically (with weights decaying radially from each observation) rather than in a flow-dependent manner. • Mathematical framework: Section 6.6 of the text

  4. Successive-Correction Methods Uniform grid, non-uniformly distributed observations. The analysis at a given grid point k is influenced only by observations within its circle of influence, with greater weight given to observations at smaller distances rik.

  5. Successive-Correction Methods Nudging of the background toward the observations occurs only over the influence region of limited spatial extent. The nudging amplitude decays with increasing distance (exponentially so in the Barnes scheme).
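The successive-correction idea on the last three slides can be sketched in a few lines of Python. This is a minimal illustration, not operational code; the classic Cressman weight (R² − r²)/(R² + r²) is used here as one concrete choice of radially decaying weight, and all variable names are ours.

```python
import numpy as np

def cressman_weight(r, R):
    """Cressman weight: decays radially, zero outside the influence radius R."""
    w = (R**2 - r**2) / (R**2 + r**2)
    return np.where(r < R, w, 0.0)

def correction_pass(grid_xy, background, obs_xy, obs_val, obs_bg, R):
    """One correction pass: nudge each grid point toward nearby observations.

    obs_bg is the background interpolated to the observation locations,
    so (obs_val - obs_bg) is the observation increment being spread.
    """
    analysis = background.copy()
    for k, (gx, gy) in enumerate(grid_xy):
        r = np.hypot(obs_xy[:, 0] - gx, obs_xy[:, 1] - gy)
        w = cressman_weight(r, R)
        if w.sum() > 0:
            # weighted mean of the increments inside the circle of influence
            analysis[k] += np.sum(w * (obs_val - obs_bg)) / np.sum(w)
    return analysis
```

A grid point co-located with an observation receives the full increment; a grid point outside every circle of influence is left at its background value, illustrating the limited spatial extent of the nudging.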

  6. Optimal Interpolation • This is the data assimilation method described to this point in the class. • The spatial influence of analysis increments is determined by the background error characteristics B, whether flow-dependent or not. • Contrast with successive correction, where the spatial influence is purely isotropic. • Here, only a few nearby observations are considered important when determining the analysis increment. • Many of the covariances in B are thus set to zero. • “Nearby” is determined empirically (trial & error, experience)

  7. Optimal Interpolation • Optimal interpolation is most often applied as intermittent data assimilation. • As before, the analysis is produced by correcting the background state (xb) with knowledge of its error characteristics (B) and observational information (y).
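The OI correction of the background described above can be sketched as follows, assuming the standard gain K = BHᵀ(HBHᵀ + R)⁻¹ with a linear observation operator H. The toy matrices in the usage below are illustrative assumptions, not values from the slides.

```python
import numpy as np

def oi_analysis(xb, B, H, R, y):
    """Optimal interpolation: xa = xb + K (y - H xb),
    with gain K = B H^T (H B H^T + R)^{-1}."""
    innovation = y - H @ xb                 # observation-minus-background
    S = H @ B @ H.T + R                     # innovation covariance
    K = B @ H.T @ np.linalg.inv(S)          # OI gain matrix
    return xb + K @ innovation

# Two grid points, one observation of the first point; the off-diagonal
# covariance in B spreads part of the increment to the unobserved point.
xb = np.zeros(2)
B = np.array([[1.0, 0.5], [0.5, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[1.0]])
y = np.array([2.0])
xa = oi_analysis(xb, B, H, R, y)
```

Here the observed point moves halfway toward the observation (equal background and observation error variances), and the neighboring point receives a smaller increment set entirely by the covariance in B.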

  8. Variational Methods • What distinguishes variational data assimilation methods from non-variational methods? • Variational methods work iteratively to minimize a cost function that measures the distance of the analysis from both the background and the observations. • Non-variational methods are typically based upon least squares minimization of the analysis error variance or Newtonian nudging.

  9. Variational Methods • Recall: cost function and least squares minimization are functionally equivalent methods. • Least squares minimization: find an optimal gain matrix K that minimizes the analysis error covariance matrix A. • Cost function minimization: find an optimal analysis that minimizes the cost function defined by the distances from both the observations (y) and the first-guess estimate (xb). • As before, this ‘distance’ is a squared error. • Likewise, the squared errors are weighted by the precisions of the measurements, given by the inverses of R and B, respectively.
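The equivalence stated above can be checked numerically with the standard 3D-Var cost function J(x) = ½(x − xb)ᵀB⁻¹(x − xb) + ½(y − Hx)ᵀR⁻¹(y − Hx): the OI/least-squares analysis also minimizes J. The toy matrices below are illustrative assumptions.

```python
import numpy as np

def cost(x, xb, Binv, H, Rinv, y):
    """3D-Var cost: squared distances to background and observations,
    each weighted by the corresponding precision (inverse of B or R)."""
    db = x - xb
    do = y - H @ x
    return 0.5 * db @ Binv @ db + 0.5 * do @ Rinv @ do

xb = np.zeros(2)
B = np.array([[1.0, 0.5], [0.5, 1.0]])
H = np.array([[1.0, 0.0]])
R = np.array([[1.0]])
y = np.array([2.0])

# Least-squares (OI) analysis via the optimal gain K
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
xa = xb + K @ (y - H @ xb)

Binv, Rinv = np.linalg.inv(B), np.linalg.inv(R)
# The same xa minimizes J: its cost is below the background's cost
assert cost(xa, xb, Binv, H, Rinv, y) < cost(xb, xb, Binv, H, Rinv, y)
```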

  10. 3D-Var Data Assimilation • Recall: simple variational example, with cost contributions J(To) and J(Tb) from the observation and the background. • Multidimensional analog: the full cost function J(x) [equations shown on the original slide].

  11. 3D-Var Data Assimilation • Similar to before, the minimum of J(x) is found where its gradient is equal to zero: • The minimum of J(x) can be found analytically, but it is computationally demanding to do so. • Instead, it is typically obtained iteratively, using both J(x) and its gradient.

  12. 3D-Var Data Assimilation • The minimum is approached over a small number of iterations using a desired minimization method. • In this regard, the goal is to achieve the greatest reduction in J in the fewest steps, without passing the point of diminishing returns. • The first iteration starts at xb, from which xa is approached by descending the gradient of J.

  13. 3D-Var Data Assimilation • Iteration continues until the gradient of J is nearly zero and successive iterations change J only slightly. • This is functionally equivalent to least squares, i.e., • Start at the background state xb. • Obtain xa by appropriately weighting (minimizing) the distance between y and xb.
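The iterative descent described on slides 11–13 can be sketched with plain steepest descent on the 3D-Var gradient, ∇J = B⁻¹(x − xb) − HᵀR⁻¹(y − Hx). This is only a toy: operational systems use conjugate-gradient or quasi-Newton minimizers, and the matrices below are the same illustrative assumptions as before.

```python
import numpy as np

def grad_J(x, xb, Binv, H, Rinv, y):
    """Gradient of the 3D-Var cost function:
    grad J = B^{-1}(x - xb) - H^T R^{-1}(y - H x)."""
    return Binv @ (x - xb) - H.T @ Rinv @ (y - H @ x)

xb = np.zeros(2)
Binv = np.linalg.inv(np.array([[1.0, 0.5], [0.5, 1.0]]))
H = np.array([[1.0, 0.0]])
Rinv = np.linalg.inv(np.array([[1.0]]))
y = np.array([2.0])

# Start at the background and descend the gradient until it is nearly zero
x = xb.copy()
step = 0.3  # fixed step size for this toy problem only
for _ in range(200):
    g = grad_J(x, xb, Binv, H, Rinv, y)
    if np.linalg.norm(g) < 1e-8:
        break
    x -= step * g
```

The iterates slide down the paraboloid from xb and stop where the gradient is nearly zero, which recovers the same analysis the closed-form OI gain gives for this problem.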

  14. 3D-Var Data Assimilation

  15. 3D-Var Data Assimilation • The cost function is parabolic, with J(xb) at xb and J(xa) at xa (near the minimum of J(x)). • xb has large x1 and medium x2, while xa has medium x1 and medium x2 (where x1, x2 span the model space). • Minimization slides down the paraboloid over three iterations until it approaches the minimum, at which point it stops and takes that point as xa.

  16. Practical Manifestation Reference: COMET NWP Module “Understanding Assimilation Systems: How Models Create Their Initial Conditions” http://www.meted.ucar.edu/nwp/model_dataassimilation/navmenu.php?tab=1&page=6.4.0

  17. Practical Manifestation Adjust the background to match the observation (point adjustment)

  18. Practical Manifestation Adjust back toward the background (layer adjustment)

  19. Practical Manifestation Iterate until optimal balance is achieved (i.e., minimized cost function or analysis error variance is found)

  20. Practical Manifestation Update related/interdependent model fields (fundamentally included in the multidimensionality of the data assimilation problem via covariance terms)

  21. Practical Manifestation • Here, we use thermal wind balance: for zonal flow, increased shear corresponds to a larger meridional virtual temperature gradient. • Nominally, this warms to the right of the shear vector and cools to the left of it. • This is what happens below the level of the observation, to its SE! • Above that level, the inverse is true: decreased shear leads to a weaker gradient and cooling (warming) to the right (left).
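The shear–temperature-gradient link invoked above is the standard thermal wind relation for zonal geostrophic flow, ∂u_g/∂z = −(g / fT) ∂T/∂y. A quick back-of-the-envelope check with illustrative midlatitude numbers (our assumptions, not values from the slides):

```python
# Thermal wind for zonal flow: du_g/dz = -(g / (f * T)) * dT/dy
g = 9.81        # m s^-2, gravitational acceleration
f = 1.0e-4      # s^-1, midlatitude Coriolis parameter (assumed)
T = 270.0       # K, layer-mean temperature (assumed)
dTdy = -1.0e-5  # K m^-1, i.e. 1 K colder per 100 km poleward (assumed)

dugdz = -(g / (f * T)) * dTdy
# Positive westerly shear with height accompanies the poleward-decreasing
# temperature, consistent with the slide's shear/gradient pairing.
```

With these numbers the geostrophic westerly wind increases by a few m s⁻¹ per kilometer of height, a typical midlatitude value.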

  22. Practical Manifestation But, we have an observation at that other location to account for. Repeat the process in steps 1-4 for this observation.

  23. Practical Manifestation This observation also produces a non-local response: warming in a layer increases the thickness, resulting in higher pressures above and lower pressures below, imparting an increment to the horizontal wind field.

  24. Practical Manifestation The 3D-Var problem attempts to optimally combine not just two, but many more than two observations! Recall the dimensionality: n x n – often O(10^8) or more!

  25. Benefits of 3D-Var vs. OI • 3D-Var considers the totality of the error covariance matrix (rather than just a reduced version thereof). • This produces a smoother, more realistic analysis. • 3D-Var methods typically encapsulate post-DA atmospheric state balancing within the cost function minimization process. • OI and other methods typically require a separate step to achieve this once an initial analysis has been obtained.

  26. Benefits of 3D-Var vs. OI • Data assimilation is done in observation space rather than model space. • Thus, data can be assimilated without the direct use of a retrieval algorithm. • The conversion to model space does, however, require internal code that acts like a retrieval algorithm. • Such code for most available data sets is included within widely available variational data assimilation systems.

  27. Benefits of 3D-Var vs. OI • The background error covariance matrix B for both 3D-Var and OI methods is typically flow-independent. • Time evolution for the simulation at hand is necessary for a flow-dependent B to be estimated. • Thus, 3D-Var may not produce a truly optimal xa because it does not adjust to the observations in line with the present regime’s flow-dependent errors. • It only adjusts in line with less case-specific error estimates. • This motivates more advanced DA methods, which we will discuss in our next lecture.

  28. Diabatic Initialization • Methodology by which details about precipitation and the fields that it impacts are incorporated into a model’s initial analysis. • Relevant fields: moisture, vertical motion, divergence • Initial analyses do not typically have realistic, model-resolved kinematic and thermodynamic fields associated with precipitating features. • Necessitates model spin-up of these features. • Ex: deep, moist convection – may have too little vertical motion with too weak a divergence profile in the vertical

  29. Diabatic Initialization • Basic premise of diabatic initialization… • Relate estimates of precipitation to the model dependent variables in some way. • Assimilate the observations to ‘correct’ the background state estimate that lacks such detail. • Presumes that the background state estimate is somewhat realistic to begin with. • Features in about the right place at about the right time. • Otherwise, the lack of large-scale support for the assimilated features promotes their dissipation after the simulation starts.

  30. Diabatic Initialization • Method #1: Latent Heat Insertion • Prescribe latent heating profiles estimated from observations during pre-forecast integration. • May also be done alongside assimilation of other obs. • Uncertainties in latent heat profile retrieval important. • Manifest generally through an additive forcing term that acts not unlike a parameterization.

  31. Diabatic Initialization • Method #2: Diagnostic Approach • Use some sort of diagnostic means to relate ongoing diabatic processes at t = 0 to the model dependent variables. • Mitigates the spin-up associated with a static initialization. • However, inserted fields do still need to come into balance once the model begins to integrate. NMMI = normal mode model initialization

  32. Diabatic Initialization • Method #3: Physical Initialization • Nominally, a reversal of the parameterization process.

  33. Diabatic Initialization • Parameterization: model-resolved dependent variables used to provide information about diabatic and/or subgrid-scale processes. • In the reverse, if you estimate the aggregate effects of the parameterized process, you can back out the model-resolved dependent variables driving it. • Most often used where direct observations are lacking (analysis strongly influenced by background).

  34. Diabatic Initialization • Quality of fields obtained depends upon the robustness of the reversed parameterization. • There should be consistency between the code used to reverse the parameterization and the code used to integrate the model forward. • As with other methods, the resultant observations can be assimilated via a desired assimilation method. • Previous schematic: within an intermittent DA system

  35. Diabatic Initialization • With proper codes, 3D-Var and 4D-Var methods (among others) can directly assimilate precipitation details. • Can then be used to generate more realistic analyses. • An active, ongoing area of research within the field. • Examples in use today: HRRR, LAPS • More references available online or upon request.
