
OEM retrievals with IASI, AMSU and MHS. PM1 Telecon, 9 April 2014. R. Siddans, D. Gerber (RAL).



Presentation Transcript


  1. OEM retrievals with IASI, AMSU and MHS. PM1 Telecon, 9 April 2014. R. Siddans, D. Gerber (RAL)

  2. Agenda
  • 15:00 Review KO minutes / actions
  • 15:10 Task 1: summary of literature review
  • 15:25 Task 1: analysis of AMSU+MHS observation errors based on FM simulations
  • 15:40 Task 2: summary of Task 2 & 3 results, including:
    • Comparison of RAL and Eumetsat ODV results
    • Comparison of IR and MWIR retrievals over land and sea
  • 16:30 Plans for remaining tasks
  • 16:45 Discussion
    • Date for next meeting
  • 17:30 Close

  3. Actions from KO

  4. Task 1: Literature Review
  • Overview of literature presented on AMSU data processing
  • Different methods used to analyse the measurements (and errors)
  • Presentation of the most significant results
  • Conclusions for our own study

  5. Different Data Processing Methods
  • Linear regression algorithms: a “heuristic” relation between scene brightness temperature and humidity for selected channels is exploited. No error treatment, so less useful as a source of information.
  • Physical methods (i.e. OEM): finding the most likely state within the bounds of the measurement errors and climatological variability. Requires a solid assessment of all errors, hence a good source of information.
  • Neural networks: “black-box” handling of measurement/instrument errors in the training of the network, so no explicit error quantification.
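The OEM approach described above can be sketched for a linear forward model. The following is a minimal illustration only: the Jacobian K, prior covariance Sa, observation covariance Sy and state values are hypothetical placeholders, not values from this study.

```python
import numpy as np

# Minimal linear OEM retrieval sketch: find the state most consistent
# with the measurements (covariance Sy) and the climatological prior
# (covariance Sa). All values here are hypothetical placeholders.
rng = np.random.default_rng(0)
n_state, n_meas = 3, 5
K = rng.standard_normal((n_meas, n_state))       # Jacobian / weighting functions
Sa = np.eye(n_state) * 4.0                       # a priori (climatology) covariance
Sy = np.eye(n_meas) * 0.04                       # observation error covariance

a = np.zeros(n_state)                            # a priori state
x_true = np.array([1.0, -0.5, 2.0])
y = K @ x_true + rng.multivariate_normal(np.zeros(n_meas), Sy)

# Minimise J(x) = (y - Kx)^T Sy^-1 (y - Kx) + (x - a)^T Sa^-1 (x - a)
Sy_inv = np.linalg.inv(Sy)
S_hat = np.linalg.inv(K.T @ Sy_inv @ K + np.linalg.inv(Sa))  # retrieval covariance
x_hat = a + S_hat @ K.T @ Sy_inv @ (y - K @ a)               # retrieved state
```

The posterior covariance S_hat is never larger than the prior Sa, which is how the "solid assessment of all errors" translates into an explicit error estimate on the retrieved state.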

  6. Summary of Literature and Relevant Findings

  7. Overview of AMSU Random Errors from Literature

  8. Comparison of Met Office Sy vs. Literature

  9. Overview of AMSU Systematic Errors from Literature

  10. Some Specific Findings
  • Atkinson 2001: there was a 40 K bias in some AMSU-B channels pre September 1999 (data transmitters).
  • Wu 2001: RTTOV statistics compared to observations indicate random errors (and biases) far larger than pure NEBT.
  • Chou 2004: the standard deviation of the error differs between off-nadir and nadir views. The sign of the difference is channel dependent!
  • Olsen 2008: channel 4 (after the Aug 2007 NEBT increase) carries no information on the surface or atmosphere; use for cloud flagging only.
  • Mitra 2010: temperature anomaly in channel 7 (exploited to detect cyclones).
  • Generally NEBT was higher at the start (Atkinson 2001) and higher towards the end (MacKaque 2001, 2003).

  11. Some Specific Findings
  • Atkinson 2001: slight gain drop and NEBT increase in Chs. 18 & 20. Thermal oscillation of Ch. 16 in early 1999; also a temperature anomaly in Ch. 17.
  • Li 2000, Rosenkranz 2001: critical dependence on the first-guess profile (iterative pre-selection). Geomagnetic field correction to Ch. 14.
  • Eyre 1998: retrieval more affected by correlations in the background error covariance matrix than by observation error.

  12. Conclusions
  • NEBT values in the literature are roughly consistent, with increased numbers (in some channels) in later publications.
  • Some channels require bias correction (corrected in the latest version of the Lv1b data).
  • Some channels have intermittent problems (abnormal bias or NEBT), so select dates accordingly.
  • The most recent NEBT data are consistent with the Met Office “diagnosed error”.
  • All records of total measurement error from NWP analysis are consistent with the Met Office “operational error”.

  13. Testing RAL implementation: RAL vs Eumetsat RT simulations

  14. RAL vs Eumetsat Initial Cost function

  15. Estimation of AMSU+MHS errors: Simulations from PWLR

  16. Observation – simulations (PWLR)

  17. Observation - simulations

  18. Observation - simulations

  19. Observation – simulations (IASI)

  20. Observation – simulations (after bias correction and retrieval)

  21. Observation – simulations (x-track dependence, from PWLR)

  22. Observation – simulations (x-track dependence, from IASI retrieval)

  23. Observation – simulations after MW bias correction

  24. Observation covariance derived from MW residuals from IASI retrieval
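An observation covariance derived from residuals, as on this slide, amounts to taking the sample covariance of bias-corrected observation-minus-simulation differences. A small sketch, with a synthetic residuals array standing in for the real MW residuals:

```python
import numpy as np

# Sketch: empirical observation covariance from (observation - simulation)
# residuals, here derived for the MW channels against the IASI retrieval.
# 'residuals' is synthetic: n_scenes x n_channels of O-F differences.
rng = np.random.default_rng(1)
n_scenes, n_channels = 1000, 5
residuals = rng.standard_normal((n_scenes, n_channels)) * 0.3

residuals = residuals - residuals.mean(axis=0)   # remove mean (bias correction)
Sy = residuals.T @ residuals / (n_scenes - 1)    # sample covariance (K^2)

# Uncorrelated variant (as used later): same diagonal, off-diagonals zero
Sy_uncorr = np.diag(np.diag(Sy))
```

The uncorrelated variant keeps the per-channel variances but discards inter-channel correlations, which is the comparison made in the Task 2 retrievals.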

  25. Task 2 & 3
  • Retrievals run over both sea (T2) and land (T3)
  • All 3 days (17 April, 17 July, 17 October 2013)
  • IR-only retrievals compared to Eumetsat ODV
    • Differences small compared to noise and mainly related to the different convergence approach, which affects scenes for which the final cost is high (deserts, sea ice)
  • MWIR retrieval run with 2 options for Sy:
    • Correlated (as previous slide)
    • Uncorrelated (same diagonals)
  • Linear simulations also performed for 4 sample scenes to assess information content
    • Additional case of 0.2 K NEBT (uncorrelated)
    • Approximate perfect knowledge of MWIR emissivity

  26. Summary of DOFS

  27. Summary from Linear Simulations
  • Using the derived observation errors, IASI+MHS add 2 degrees of freedom to temperature and about half a degree of freedom to water vapour.
  • Effects on ozone are negligible.
  • Neglecting off-diagonals reduces DOFS for temperature and water vapour by about 0.1 (a small effect).
  • For temperature, the improvements relate mainly to the stratosphere, though some improvement is also noticeable in the troposphere, especially over the ocean (where the assumed measurement covariance is relatively low).
  • For water vapour, improvements relate mainly to the upper troposphere, and penetrate to relatively low altitudes in the mid-latitudes.
  • Assuming 0.2 K NEBT errors for all channels adds an additional degree of freedom to temperature and an additional half a degree of freedom to water vapour, in some cases considerably sharpening the near-surface averaging kernel.
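The DOFS numbers quoted above are the trace of the averaging kernel matrix. A sketch of that calculation, in which the Jacobian K and the covariances Sa and Sy are hypothetical placeholders rather than the study's actual values:

```python
import numpy as np

# Sketch: degrees of freedom for signal (DOFS) as the trace of the
# averaging kernel A. K, Sa and Sy are hypothetical placeholders for
# the real Jacobian and covariances.
rng = np.random.default_rng(2)
n_state, n_meas = 20, 30
K = rng.standard_normal((n_meas, n_state))       # Jacobian
Sa = np.eye(n_state)                             # a priori covariance
Sy = np.eye(n_meas) * 0.1                        # observation covariance

def averaging_kernel(K, Sa, Sy):
    """A = (K^T Sy^-1 K + Sa^-1)^-1 K^T Sy^-1 K."""
    Sy_inv = np.linalg.inv(Sy)
    gain = np.linalg.inv(K.T @ Sy_inv @ K + np.linalg.inv(Sa)) @ K.T @ Sy_inv
    return gain @ K

dofs = np.trace(averaging_kernel(K, Sa, Sy))

# Tightening the assumed noise (cf. the 0.2 K NEBT case) raises DOFS
dofs_tight = np.trace(averaging_kernel(K, Sa, Sy * 0.5))
```

Reducing the observation covariance pushes each averaging-kernel eigenvalue towards 1, which is why the 0.2 K NEBT case adds degrees of freedom.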

  28. Assessment of full Retrieval
  • Based on comparing the retrieval to the analysis (ANA), the Eumetsat retrieval (ODV), the PWLR, and the analysis smoothed by the averaging kernel (ANA_AK):
    x’ = a + A ( t - a )
    where a is the a priori profile from the PWLR, t is the supposed “true” profile and A is the retrieval averaging kernel.
  • Profiles smoothed/sampled to a grid more closely matching the expected vertical resolution (than the 101-level RTTOV grid):
    • Temperature: 0, 1, 2, 3, 4, 6, 8, 10, 12, 14, 17, 20, 24, 30, 35, 40, 50 km
    • Water vapour: 0, 1, 2, 3, 4, 6, 8, 10, 12, 14, 17, 20 km
    • Ozone: 0, 6, 12, 18, 24, 30, 40 km
  • The grid is defined relative to the surface pressure / z*.
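The smoothing convention above can be sketched in a few lines; the profiles and the averaging kernel here are toy placeholders, chosen only to show the mechanics of x' = a + A (t - a).

```python
import numpy as np

# Sketch of the ANA_AK convention: smooth a supposed "true" profile t
# (e.g. the NWP analysis) with the retrieval averaging kernel A about
# the PWLR a priori a. Profiles and A are toy placeholders.
n_levels = 17
a = np.linspace(280.0, 220.0, n_levels)          # a priori profile (K)
t = a + np.sin(np.linspace(0.0, 3.0, n_levels))  # supposed "true" profile

# Toy averaging kernel: damps and vertically smooths departures from a
A = (np.eye(n_levels) * 0.6
     + np.diag(np.full(n_levels - 1, 0.15), k=1)
     + np.diag(np.full(n_levels - 1, 0.15), k=-1))

x_ana_ak = a + A @ (t - a)                       # ANA_AK profile
```

With A equal to the identity the smoothed profile reduces to t itself; a realistic kernel damps fine vertical structure the retrieval cannot resolve, which is why ANA_AK is the fairer comparison target.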

  29. Mid-lat land full retrieval: Measurements and residuals: IR only

  30. Mid-lat land full retrieval: Measurements and residuals: MWIR

  31. Mid-lat land full retrieval: Profile comparisons: IR only

  32. Mid-lat land full retrieval: Profile comparisons: MWIR

  33. Mid-lat ocean full retrieval: Measurements and residuals: IR only

  34. Mid-lat ocean full retrieval: Measurements and residuals: MWIR

  35. Mid-lat ocean full retrieval: Profile comparisons: IR only

  36. Mid-lat ocean full retrieval: Profile comparisons: MWIR

  37. Cost function + Number of iterations: IR only

  38. Cost function + Number of iterations: MWIR

  39. IR vs MWIR: Temperature

  40. IR vs MWIR: Water vapour

  41. Latitude dependence: MWIR

  42. Latitude dependence: IR only

  43. View dependence: MWIR only

  44. View dependence: IR only

  45. IR only

  46. MWIR
