
Data Assimilation Theory CTCD Data Assimilation Workshop Nov 2005



Presentation Transcript


  1. Data Assimilation Theory. CTCD Data Assimilation Workshop, Nov 2005. Sarah Dance

  2. Data assimilation is often treated as a black-box algorithm: observations and a priori information go IN; an analysis comes OUT (apologies to Rube Goldberg). BUT understanding and developing what goes on inside the box is crucial!!

  3. Some DARC DA Theory Group Projects

  4. Formulations of the Ensemble Kalman Filter and Bias MSc thesis by David Livings, supervised by Sarah Dance and Nancy Nichols

  5. Outline • Bayesian state estimation and the Kalman Filter • The EnKF • Bias and the EnKF • Conclusions

  6. Prediction (between observations). E.g. suppose x_k = M x_{k−1} + η, where M is linear and the prior and model noise are Gaussian: P(x_{k−1}) ~ N(x_b, P), η ~ N(0, Q). Then the forecast distribution is P(x_k) ~ N(M x_b, M P Mᵀ + Q).
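The prediction step on this slide can be sketched numerically. The matrices M, Q and the prior moments x_b, P below are illustrative values, not taken from the talk:

```python
import numpy as np

# Hypothetical small linear system (illustrative values).
M = np.array([[1.0, 0.1],
              [0.0, 1.0]])        # linear forecast model
Q = 0.01 * np.eye(2)              # model-noise covariance
x_b = np.array([1.0, 0.0])        # prior (background) mean
P = 0.1 * np.eye(2)               # prior covariance

# Pushing a Gaussian prior through a linear model with additive Gaussian
# noise gives another Gaussian, with these moments:
x_f = M @ x_b                     # forecast mean      M x_b
P_f = M @ P @ M.T + Q             # forecast covariance M P M^T + Q
```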

  7. At an observation we use Bayes' rule: P(x | y) ∝ P(y | x) P(x), where the prior P(x) is the background error distribution and the likelihood P(y | x) is the observation error pdf.

  8. Bayes rule illustrated

  9. Bayes rule illustrated (cont)

  10. The Kalman Filter • Use the prediction equation and Bayes' rule • Assume linear models (forecast and observation) • Assume Gaussian statistics • ⇒ Kalman filter BUT • Models are nonlinear • Evolving large covariance matrices is expensive (10⁶ × 10⁶ in meteorology) • So use an ensemble (Monte Carlo idea)
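The Monte Carlo idea in the last bullet is to represent the covariance by a sample of states rather than an explicit n × n matrix. A minimal sketch, with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(42)
n, N = 100, 20            # state dimension and ensemble size (illustrative)

# Draw an ensemble from the prior instead of storing/evolving an n x n covariance.
x_b = np.zeros(n)
ensemble = x_b[:, None] + rng.normal(size=(n, N))   # one member per column

# Sample statistics estimate the prior moments using only O(n*N) storage;
# the full matrix below is formed here only to inspect it.
mean = ensemble.mean(axis=1)
anoms = ensemble - mean[:, None]
P_est = anoms @ anoms.T / (N - 1)    # low-rank estimate: rank <= N-1
```

Each ensemble member is propagated through the (possibly nonlinear) forecast model, so the covariance never has to be evolved explicitly.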


  12. Results with the ETKF (old formulation) and Peter Lynch's swinging spring model. [Figure: N = 10, perfect observations; red = ensemble mean, blue = ensemble std; error bars indicate observation std.] The ensemble statistics are not consistent with the truth!

  13. Bias and the EnKF • Many EnKF algorithms can be put into a "square root" framework. • Define an ensemble perturbation matrix X′ whose i-th column is (x_i − x̄)/√(N−1). • So, by definition of the ensemble mean, the rows of X′ sum to zero: X′ 1 = 0.
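A quick numerical check of the perturbation-matrix definition, on a random illustrative ensemble (the 1/√(N−1) scaling follows the common convention):

```python
import numpy as np

rng = np.random.default_rng(0)
n, N = 3, 10                      # state dimension and ensemble size (illustrative)
X = rng.normal(size=(n, N))       # ensemble: one member per column

x_bar = X.mean(axis=1, keepdims=True)      # ensemble mean
Xp = (X - x_bar) / np.sqrt(N - 1)          # ensemble perturbation matrix

# By definition of the mean, each row of Xp sums to zero,
# and Xp @ Xp.T is exactly the sample covariance of the ensemble.
assert np.allclose(Xp @ np.ones(N), 0.0)
assert np.allclose(Xp @ Xp.T, np.cov(X))
```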

  14. Square-root ensemble updates • The mean of the ensemble is updated separately. • Ensemble perturbations are updated as X′ᵃ = X′ᶠ T, where T is a (non-unique) square root of the covariance update equation. • Thus, for consistency, the analysis perturbations must also sum to zero: X′ᵃ 1 = 0. • David discovered that not all implementations preserve this property. • We have now found necessary and sufficient conditions for consistency.
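One way to see the consistency issue numerically; the particular T matrices below are illustrative, not from the thesis. A sufficient condition for consistency is that T has the vector of ones as an eigenvector, so that the zero-sum property of the forecast perturbations carries over:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 5                                # illustrative sizes
Xf = rng.normal(size=(n, N))
Xf -= Xf.mean(axis=1, keepdims=True)       # forecast perturbations: rows sum to zero
ones = np.ones(N)

# If T @ 1 = c * 1, the analysis perturbations Xf @ T inherit the
# zero-sum property; a generic T does not.
T_good = np.eye(N) - 0.1 * np.outer(ones, ones) / N   # has 1 as an eigenvector
T_bad = rng.normal(size=(N, N))                       # generic matrix

assert np.allclose(Xf @ T_good @ ones, 0.0)      # consistent: ensemble stays unbiased
assert not np.allclose(Xf @ T_bad @ ones, 0.0)   # inconsistent: spurious mean shift
```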

  15. Consequences • The ensemble will be biased • The size of the ensemble spread will be too small • Filter divergence is more likely to occur! • Care must be taken in algorithm choice and implementation
