
ECMWF/EUMETSAT NWP-SAF Satellite data assimilation Training Course


Presentation Transcript


  1. ECMWF/EUMETSAT NWP-SAF Satellite data assimilation Training Course 1 to 4 July 2013

  2. ECMWF/EUMETSAT NWP-SAF Satellite data assimilation Training Course The estimation and correction of systematic errors (with some examples from climate reanalysis)

  3. Why do we need to worry about biases? In data assimilation, observation errors should be random and Gaussian. Systematic errors must be removed, otherwise biases will propagate into the analysis (causing global damage in the case of satellites!). A bias in the radiances is defined as: bias = mean[ Yobs – H(Xtrue) ]
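As a minimal numerical illustration of this definition (synthetic numbers, not data from the course), the bias of a single channel is just the mean observed-minus-simulated departure over a sample:

```python
import numpy as np

# Sketch: estimate the bias of one channel as mean(Yobs - H(X)).
# The numbers below are synthetic; a real H(X) comes from a radiative transfer model.
rng = np.random.default_rng(0)
y_sim = 250.0 + rng.normal(0.0, 0.2, size=1000)        # simulated brightness temperatures (K)
y_obs = y_sim + 0.5 + rng.normal(0.0, 0.2, size=1000)  # observations with a +0.5 K systematic offset

bias = np.mean(y_obs - y_sim)
print(f"estimated bias: {bias:+.2f} K")                # close to +0.5 K
```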

  4. Why do we need to worry about biases? Figure: global 200 hPa temperature in ERA-15 and ERA-40, showing the impact of the cosmic-shower failure of MSU on NOAA-11.

  5. The definition of a bias: What we would like to quantify is: mean[ Yobs – H(Xtrue) ]. But in practice all we can monitor / measure is: mean[ Yobs – H(Xb/a) ] …see the dedicated lecture on monitoring…

  6. Potential sources of bias: • Instrument calibration / anomalies • Instrument characterisation • Radiative transfer model / spectroscopy • Surface emissivity model • Observation QC / selection / scale • NWP model used to diagnose bias

  7. Characteristics of the bias: • Simple constant offset • Geographically / air-mass varying • Scan dependent • Time dependent • Satellite dependent • NWP model dependent

  8. Air-mass variation of the bias:

  9. Scan variation of the bias: Figure: NOAA-18 AMSUA temperature sounding channels, bias as a function of scan position (limb – nadir – limb).

  10. Time variation of the bias: Figures: diurnal dependence of the bias (K), seasonal dependence of the bias (K), and drifting dependence of the bias (K), over June 2004 to Dec 2004.

  11. Satellite dependence of the bias: HIRS channel 5 (peaking around 600 hPa) on the NOAA-14 satellite has a +2.0 K radiance bias against the model, while HIRS channel 5 on the NOAA-16 satellite has no radiance bias against the model.

  12. Characteristics of the bias:

  13. Sources and Characteristics of the bias:

  14. How do we correct for biases? • The type of correction used must be suited to the types of bias we have in our system and what we wish to correct (or perhaps more importantly what we do not wish to correct). • Simple constant offset C • Static air-mass predicted correction C[p1,p2,p3…] • Adaptive (in time) predicted correction C[p1,p2,p3…,t]
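Schematically (paraphrasing the slide's C, C[p1,p2,…] notation, not an official formulation), the three levels of complexity for the correction b can be written as:

```latex
% Schematic forms of the bias correction b, following the slide's notation
\begin{align}
  b      &= c                                           && \text{simple constant offset}\\
  b(x)   &= c_0 + \sum_{i=1}^{N} c_i\, p_i(x)           && \text{static air-mass predicted correction}\\
  b(x,t) &= c_0(t) + \sum_{i=1}^{N} c_i(t)\, p_i(x)     && \text{adaptive (in time) predicted correction}
\end{align}
```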

  15. A predictor based bias correction: We pre-define a set of predictors [P1, P2, P3…]. From a training sample of departures [ Yobs – H(Xb/a) ] we find the values of the predictor coefficients that best predict the mean component of the departures. Predictors might be: mean temperature, TCWV, ozone, scan position, surface temperature, etc.
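A minimal sketch of this fitting step (synthetic predictors and departures, ordinary least squares assumed; variable names are illustrative): stack the predictors into a design matrix and regress the departures onto them.

```python
import numpy as np

# Sketch of a static predictor-based bias correction fit (synthetic data).
# Columns of P: constant term plus air-mass style predictors.
rng = np.random.default_rng(1)
n_obs = 2000
layer_temp = rng.normal(250.0, 5.0, size=n_obs)   # hypothetical layer-mean temperature predictor
tcwv = rng.gamma(2.0, 10.0, size=n_obs)           # hypothetical total column water vapour predictor

P = np.column_stack([np.ones(n_obs), layer_temp, tcwv])          # design matrix [1, p1, p2]
true_coeffs = np.array([1.0, -0.004, 0.01])                      # "unknown" bias structure
departures = P @ true_coeffs + rng.normal(0.0, 0.3, size=n_obs)  # Yobs - H(Xb) training sample

coeffs, *_ = np.linalg.lstsq(P, departures, rcond=None)  # least-squares coefficient estimate
bias_correction = P @ coeffs                             # predicted systematic part of the departures
print(coeffs.round(3))
```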

  16. Adaptive predictor based bias correction: We pre-define a set of predictors [P1, P2, P3…]. From a training sample of departures [ Yobs – H(Xb/a) ] we find the values of the predictor coefficients that best predict the mean component of the departures. The training sample will generally be the radiance departure statistics of the current assimilation window, and the values of the predictor coefficients will be updated each analysis cycle (e.g. every 12 hours).
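One simple way to make such a fit adaptive (purely illustrative; the operational scheme described below is variational) is to re-fit the coefficients to each new window of departures and relax the previous estimate towards the new fit, so the coefficients evolve slowly from cycle to cycle:

```python
import numpy as np

def update_coeffs(prev_coeffs, P_window, dep_window, relax=0.1):
    """Blend the previous coefficients with a fresh least-squares fit to the
    current window's departures (illustrative adaptive update, not VarBC)."""
    new_fit, *_ = np.linalg.lstsq(P_window, dep_window, rcond=None)
    return (1.0 - relax) * np.asarray(prev_coeffs) + relax * new_fit

# Each analysis cycle (e.g. every 12 hours): build P_window and dep_window from
# that window's radiance departures, then carry the updated coefficients forward.
```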

  17. Adaptive predictor based bias correction: External adaptive bias correction: perform the analysis, then update the bias coefficients in a separate step, cycle after cycle. Internal adaptive bias correction: perform the analysis and update the bias coefficients together, cycle after cycle.

  18. Internal adaptive predictor based bias correction (VarBC): the analysis and the update of the bias coefficients are performed together within each analysis cycle.
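Schematically, VarBC does this by adding the bias coefficients β to the control vector, so that a single cost function is minimised for the model state and the coefficients together (a standard schematic form, not a quote from the lecture):

```latex
% Schematic VarBC cost function: state x and bias coefficients beta are analysed together.
\begin{equation}
  J(x,\beta) =
      \tfrac{1}{2}(x-x_b)^{\mathrm T} B^{-1} (x-x_b)
    + \tfrac{1}{2}(\beta-\beta_b)^{\mathrm T} B_\beta^{-1} (\beta-\beta_b)
    + \tfrac{1}{2}\,[\,y - H(x) - b(x,\beta)\,]^{\mathrm T} R^{-1} [\,y - H(x) - b(x,\beta)\,]
\end{equation}
```

The extra background term on β (with covariance B_β) is what restrains how fast the coefficients can drift from one cycle to the next.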

  19. Bias corrections of MSU2 in ERA-Interim. NOAA-14 recorded warm-target temperature changes due to orbital drift (Grody et al. 2004). Jan 1989: transition between two separate production streams.

  20. When bias corrections go wrong: • Correction of NWP model error • Under-adaptive (Pinatubo) • Over-adaptive • Interaction feedback with QC

  21. Correction of NWP model error. Our training sample is mean[ Yobs – H(Xb/a) ]. Figure: IASI channel 76.

  22. Correction of NWP model error. Our training sample is mean[ Yobs – H(Xb/a) ]. Figure: NOAA-16 AMSUA channel 14, with the VarBC bias correction anchored to zero in Nov-07 for cycle 35R1 (T799/L91).

  23. Under-adaptive correction. Our training sample is mean[ Yobs – H(Xb/a) ]. Figure: 200 hPa temperature in ERA-15 and ERA-40, showing the cosmic-shower failure of MSU on NOAA-11.

  24. Under-adaptive correction. Our training sample is mean[ Yobs – H(Xb/a) ]. Figure: NOAA-10 and NOAA-12.

  25. Under-adaptive correction. Our training sample is mean[ Yobs – H(Xb/a) ].

  26. Interaction with QC. Our training sample is mean[ Yobs – H(Xb/a) ].

  27. How do we define and spot a bias? • Sources of bias and their characteristics • Should we correct a bias or not? • Complexity spectrum of bias correction → adaptive bias correction • Constraining bias corrections • Examples (good and bad)

  28. When corrections go wrong: • Correction of NWP model error (we may require this in some cases) • Under-adaptive (Pinatubo) • Over-adaptive • Interaction feedback with QC

  29. How do we stop corrections going wrong: • Restrict number of predictors • Restrict values of predictors • Use of intelligent pattern predictors • Restrict time evolution of predictors • Anchoring • Use of the MODE

  30. How do we stop corrections going wrong: • Restrict number of predictors • Restrict values of predictors • Use of intelligent pattern predictors • Restrict time evolution of predictors • Anchoring • Use of the MODE

  31. A highly complex / adaptive correction of satellite temperature data has caused a strengthening of the N–S thermal gradient and degraded the U-component of the wind, compared to a simple flat correction of the data (figure: flat bias vs. complex bias). With too many predictors the satellite data produce a mean analysis wind fit similar to that of a NO-SAT system!

  32. How do we stop corrections going wrong: • Restrict number of predictors • Restrict values of predictors • Use of intelligent pattern predictors • Restrict time evolution of predictors • Anchoring • Use of the MODE

  33. Anchoring with zero bias correction AMSUA channel 14

  34. Anchoring with zero bias correction

  35. How do we stop corrections going wrong: • Restrict number of predictors • Restrict values of predictors • Use of intelligent pattern predictors • Restrict time evolution of predictors • Anchoring • Use of the MODE

  36. Interaction with QC. Our training sample is mean[ Yobs – H(Xb/a) ]. Figure: departure distribution with the MODE and the MEAN marked.
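To see why the MODE can be preferable to the MEAN when the bias correction interacts with QC (synthetic illustration, not the lecture's data): if part of the sample is contaminated, for example by cloud-affected scenes that QC should have screened out, the mean is dragged towards the contaminated tail while the mode still tracks the clear-sky peak.

```python
import numpy as np

# Synthetic departure sample: a clear-sky population with a +0.3 K bias plus a
# cold, cloud-contaminated tail that QC should ideally have rejected.
rng = np.random.default_rng(2)
clear = rng.normal(0.3, 0.5, size=9000)
cloudy = rng.normal(-4.0, 2.0, size=1000)
departures = np.concatenate([clear, cloudy])

mean_est = departures.mean()                        # pulled towards the cloudy tail
hist, edges = np.histogram(departures, bins=200)
peak = np.argmax(hist)
mode_est = 0.5 * (edges[peak] + edges[peak + 1])    # crude histogram-mode estimate

print(f"mean: {mean_est:+.2f} K   mode: {mode_est:+.2f} K")  # mode stays near +0.3 K
```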

  37. End
