
Experience learned and recommendations from SST: AVHRR (METOP/NOAA) and MODIS/VIIRS



Presentation Transcript


  1. Experience learned and recommendations from SST: AVHRR (METOP/NOAA) and MODIS/VIIRS Anne O’Carroll (EUMETSAT) Pierre Le Borgne (CMS, Meteo-France) Peter Minnett (RSMAS) Sasha Ignatov (NESDIS)

  2. Introduction • What is your cal/val experience? • O’Carroll: validation of AATSR and IASI using the buoy network; inter-satellite/algorithm comparisons • Le Borgne: validation of METOP/AVHRR against drifting buoys at Météo-France/CMS (match-up database, buoy blacklisting) • Minnett: validation of AVHRR, MODIS, AATSR, SEVIRI and VIIRS using ship-based radiometers • Ignatov: Cal/Val, Self/Cross-Consistency Checks of SSTs & Radiances (AVHRR, MODIS, VIIRS, SEVIRI) • What will you cover in this presentation? • AVHRR L1, AVHRR L2, MODIS/VIIRS

  3. Calibration: Experience learned and recommendations from AVHRR Anne O’Carroll, Jörg Ackermann, Dieter Klaes EUMETSAT

  4. EPS calibration and validation objectives The EPS calibration and validation overall plan (EUM/EPS/SYS/PLN/02/004) is available from http://www.eumetsat.int/Home/Main/Satellites/Metop/Resources/index.htm?l=en • There are three major objectives for the EPS Calibration and Validation (including for Metop-AVHRR): • To generate validated data in a timely manner to meet the user requirements and achieve the overall mission objectives • To achieve state-of-the-art performance and accuracy from the EPS instruments and their generated data products • To ensure consistency and continuity of the Metop-B products with the Metop-A products

  5. Recommendations and constraints: • Converting raw instrument data into geo-located products (in terms of physical units). • Monitoring instrument data, products, and parameters input to the calibration processes. • Determination of the quality (precision and accuracy) of generated products at Levels 1 & 2. • Wherever possible, product quality shall be traced to common standards, e.g. NIST. • This quality must be guaranteed over the mission lifetime. • Wherever possible, the product quality should be assessed by multiple independent means. • Data should be cross-calibrated to demonstrate consistency. • Testing and revision of processing databases to ensure that the quality of the products first meets the user requirements thresholds, and eventually meets or exceeds the user requirements objectives.

  6. Cal/Val approach for AVHRR • Pre-launch: • Check tools (prototypes, SIOV tools, etc.) • Coordinate and agree instrument parameters (with NOAA) one year before launch • Test the tools with the agreed parameters and Metop-A data, and rehearse Cal/Val activities • Commissioning and routine operation (similar to Metop-A): • Instrument monitoring • Level 1 Product Verification, Checking and Validation: • Prototype processing • Pixel-by-pixel checking of: calibrated radiances for all channels; geo-location; satellite and solar azimuth and elevation; surface type, surface altitude, cloud cover (if relevant), noise levels • External partner monitoring (ECMWF)

  7. Lessons learnt from Metop-A The “Science and Product Validation Team” (SPVT) organisation (inside EUMETSAT) has ensured efficient teamwork on complex processing and cal/val issues. In order to make efficient use of scarce resources, Cal/Val plans shall concentrate on the mandatory tests for establishing instrument and product performance. Cal/Val testing shall use well-validated, stable operational product processors, to avoid delays. The architecture shall ensure that the proper tools and data actually useful for Cal/Val testing are available to the teams. Instrument and product performance monitoring activities are essential for product operations. Users should receive products as early as possible during the Cal/Val phase. Dedicated campaigns with third-party investigators have played an important role in product validation.

  8. Validation: Experience learned and recommendations from METOP/AVHRR Anne Marsouin, Sonia Péré, Pierre Le Borgne Météo-France/DP/CMS

  9. Outline • Validation against buoys -> building an MDB • METOP/AVHRR validation results (overview) • Buoy measurements may be erroneous -> blacklist • Global statistics are not completely informative -> regions

  10. Match-up Data Base (MDB) MDB files gather coincident satellite and in situ data • In situ measurements from the GTS • Automatic processing with a 5-day delay • Satellite data over a box centered on the measurement location (box size: 21x21 pixels for polar orbiters and 5x5 for geostationary) • |t_insitu - t_pixel| < Δt (3 h for polar orbiters, 1 h for geostationary) • >10% clear pixels in the validation box
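The selection rule above can be expressed as a short filter. The following is a minimal sketch only, assuming the satellite box and its cloud mask have already been extracted around the buoy location; variable names and helpers are illustrative, not the operational CMS code.

```python
from datetime import timedelta
import numpy as np

# Minimal sketch of the MDB match-up selection rule described above.
BOX_SIZE = {"polar": 21, "geo": 5}            # pixels per side, centered on the buoy
TIME_WINDOW = {"polar": timedelta(hours=3),   # |t_insitu - t_pixel| threshold
               "geo": timedelta(hours=1)}
MIN_CLEAR_FRACTION = 0.10                     # >10% clear pixels required in the box

def accept_matchup(t_insitu, t_pixel, sat_sst_box, cloud_mask_box, platform="polar"):
    """Return True if a (buoy, satellite box) pair satisfies the MDB criteria.

    sat_sst_box    : 2-D array of satellite SSTs extracted around the buoy location
    cloud_mask_box : 2-D boolean array, True where the pixel is flagged cloudy
    """
    n = BOX_SIZE[platform]
    if sat_sst_box.shape != (n, n):            # box must have the expected size
        return False
    if abs(t_insitu - t_pixel) > TIME_WINDOW[platform]:
        return False
    clear_fraction = np.mean(~cloud_mask_box)  # fraction of clear pixels in the box
    return clear_fraction > MIN_CLEAR_FRACTION
```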

  11. METOP/AVHRR global validation results using blacklist-filtered in situ measurements

  12. Real-time results in 2009: large errors in the North Pacific, impacting the global results

  13. Nighttime results, 21/08 to 31/08/2009 (raw results): a set of Technocean buoys launched at the end of July 2009 showed a 2-3 K negative bias. Need for a blacklist!

  14. Buoy blacklist: principles. Building the buoy blacklist: • Night-time data only • N satellites • Process two successive 10-day periods; a buoy is blacklisted if |bias| > 1.5 °C for 2 satellites over 1 period, or for 1 satellite over both periods • Automatically updated • Interactive control every 3 months
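The decision rule above is compact enough to write out. The sketch below assumes night-time mean biases (satellite minus buoy) have already been computed per buoy, per satellite and per 10-day period; data structures and names are illustrative only.

```python
# Illustrative sketch of the blacklist decision rule above (not the operational code).
BIAS_THRESHOLD = 1.5  # degrees Celsius

def is_blacklisted(night_bias):
    """night_bias: {period: {satellite: mean night-time bias}},
    where period is 0 or 1 for the two successive 10-day periods."""
    exceeds = {p: {s for s, b in night_bias.get(p, {}).items()
                   if abs(b) > BIAS_THRESHOLD}
               for p in (0, 1)}
    # Rule 1: |bias| > 1.5 C for at least 2 satellites within a single period
    if any(len(sats) >= 2 for sats in exceeds.values()):
        return True
    # Rule 2: |bias| > 1.5 C for the same satellite in both periods
    return bool(exceeds[0] & exceeds[1])

# Hypothetical example: two satellites exceed the threshold in the first period.
print(is_blacklisted({0: {"Metop-A": -2.1, "NOAA-19": -1.8}, 1: {"Metop-A": -0.2}}))  # True
```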

  15. Example of blacklisted buoys (without and with the blacklist): 3 erroneous buoys, 13547, 13548 and 13551

  16. Number of blacklisted buoys: METOP/AVHRR only, then adding the OSI SAF geostationary products (MSG, GOES-E)

  17. METOP results in 2010: night-time cases, METOP-2, April to July 2010. Need for regional statistics in zones adapted to known algorithmic problems

  18. Zoom over spring 2010: large errors in the tropical Atlantic!

  19. Summary • In situ data remain the validation source • However, erroneous buoys must be blacklisted • Global statistics may hide serious regional problems • The conditions that cause algorithmic problems must be identified

  20. Calibration/Validation: Experience learned and recommendations from MODIS/VIIRS, and other sensors Peter Minnett, Miguel Angel Izaguirre, Elizabeth Williams RSMAS, University of Miami. Michael Reynolds, RMRCo, Seattle.

  21. Cal/Val examples Noyes, E. J., P. J. Minnett, J. J. Remedios, G. K. Corlett, S. A. Good, and D. T. Llewellyn-Jones, 2006: The Accuracy of the AATSR Sea Surface Temperatures in the Caribbean. Remote Sensing of Environment, 101, 38-51. Use of ship-board radiometers to validate satellite SSTs from MODIS, (A)ATSR, SEVIRI and VIIRS. Removes sources of uncertainty in the comparison caused by variability in near-surface temperature gradients (skin effect and diurnal heating and cooling).

  22. Lessons Learned: Positive M-AERIs on research ships; ISAR on the NYK vessel Andromeda Leader. Use of ship-board radiometers for validating satellite SSTs. Over time, the full range of atmospheric and oceanic variability can be sampled.

  23. Lessons Learned: Positive M-AERI on: Allure of the Seas, starting 2012; Explorer of the Seas, 2000-2006, restarting in 2012. Use of commercial cruise liners provides a cost-effective mechanism for generating long time-series of radiometric measurements of skin SST, often along repeating tracks.

  24. Lessons Learned: Positive Minnett, P. J. and G. K. Corlett, 2012: A Pathway to Generating Climate Data Records of Sea-Surface Temperature from Satellite Measurements. Deep-Sea Research II. Accepted. SI-traceability of ship-board radiometers permits generation of Climate Data Records

  25. Recommendations • High-accuracy ship-board radiometers to provide SI-traceable skin SST measurements. • Mechanism to provide continuing assessment of SI-traceability and accuracy of calibrating facilities. • Access to (quality-assured) independent SST data from drifters, profilers and moorings. • Models and forcing functions for skin-layer and diurnal heating corrections to derive an estimate of the skin SST (see the sketch below). Evaluation of the accuracy of these modelled corrections.
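To illustrate the last recommendation, the sketch below shows the bookkeeping involved in forming a skin SST estimate from a depth (buoy) SST with a cool-skin offset and a modelled diurnal warming term. The 0.17 K default is a commonly cited mean night-time cool-skin value (e.g. Donlon et al., 2002); operational corrections use full skin-layer and diurnal models, so treat this only as an illustration, not a recommended parameterisation.

```python
def estimate_skin_sst(depth_sst, cool_skin_offset=0.17, diurnal_warming=0.0):
    """Very simplified sketch of a skin SST estimate (temperatures in degrees C).

    depth_sst        : temperature measured by a drifter/profiler/mooring at depth
    cool_skin_offset : how much cooler the skin is than the sub-skin layer;
                       0.17 K is a commonly cited night-time mean, NOT a general model
    diurnal_warming  : modelled warming of the upper layer relative to depth
                       (near zero at night or under well-mixed conditions)
    """
    return depth_sst - cool_skin_offset + diurnal_warming

# Example: a drifter reading of 25.30 C at night with no diurnal warming
# gives an estimated skin SST of about 25.13 C.
print(estimate_skin_sst(25.30))
```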

  26. Cal/Val tools Database and tools to provide rapid generation of “match-ups” between satellite and validating measurements, and also between SST fields from multiple satellites. Tools to analyse the match-ups to determine satellite-SST retrieval uncertainties and their dependences on controlling parameters.

  27. Sentinel-3 Cal/Val Planning Meeting, 20-22 March 2012, Frascati, Italy. Online Near-Real Time SST Monitoring at NESDIS Sasha Ignatov, Prasanjit Dash, XingMing Liang, Feng Xu NOAA/NESDIS & CIRA Acknowledgements: J. Sapper, J. Stroup, Y. Kihai, B. Petrenko, K. Saha, M. Bouali Funding Support: JPSS, GOES-R, NOAA (PSDI and NDE)

  28. Approach: Monitor SSTs and BTs online in NRT for Stability, Accuracy, Self & Cross-Consistency SST Quality Monitor (SQUAM) www.star.nesdis.noaa.gov/sod/sst/squam/ • Global validation against various L4s and in situ SST • Double-Differences (Cross-Platform & Product Consistency) In situ SST Quality Monitor (iQuam) www.star.nesdis.noaa.gov/sod/sst/iquam/ • QC in situ SST (drifters, moorings, ships) • Web: Display summary statistics & Distribute QC’ed data to users Monitoring IR Clear-sky Radiances over Oceans for SST (MICROS) http://www.star.nesdis.noaa.gov/sod/sst/micros/ • Monitor clear-sky ocean Brightness Temperatures vs. CRTM • Check for consistency with AVHRR/MODIS using Double-Differencing

  29. SQUAM L2 Products (IDPS VIIRS and MO(Y)D28 are being added) • GAC ~4 km (Global Area Coverage): NAVO SEATEMP; U. Miami + NODC Pathfinder Ocean (L3P); NESDIS ACSPO (new); MUT (heritage) • MetOp FRAC ~1 km (Full Resolution Area Coverage): EUMETSAT O&SI SAF; NESDIS ACSPO • MODIS ~1 km (Terra and Aqua): U. Miami MO(Y)D28; NESDIS ACSPO • VIIRS ~1 km (NPP): IDPS; NESDIS ACSPO • Monitor community L2 SSTs online in NRT vs. L4s and in situ • Evaluate for Stability, Accuracy, Self- and Cross-Consistency

  30. http://www.star.nesdis.noaa.gov/sod/sst/squam/

  31. ACSPO VIIRS SST minus OSTIA (Nighttime) Histograms of Δs are near-Gaussian and centered at zero

  32. STD “AVHRR minus OSTIA” (Daytime) L2-L4 statistics are automatically trended in near-real time

  33. STD “AVHRR minus Drifters” (Daytime) Similar analyses are performed in L2 minus in situ space

  34. Mean “O&SI SAF minus OSTIA” (Daytime) Hovmöller diagrams updated in near-real time

  35. http://www.star.nesdis.noaa.gov/sod/sst/iquam/

  36. iQuam QC is Consistent with the UK Met Office (*) Lorenc and Hammon, 1988; Ingleby and Huddleston, 2007

  37. MICROS : End-to-end system www.star.nesdis.noaa.gov/sod/sst/MICROS/

  38. M-O Biases and Double Differences • Model minus Observation (“M-O”) Biases • M (Model) = Community Radiative Transfer Model (CRTM) simulated TOA Brightness Temperatures (with Reynolds SST and GFS profiles as input) • O (Observation) = Clear-Sky sensor (AVHRR, MODIS, VIIRS) BTs • Double Differences (“DD”) for Cross-Platform Consistency • “M” used as a “Transfer Standard” • DDs cancel out/minimize the effect of systematic errors & instabilities in BTs arising from, e.g. • Errors/instabilities in Reynolds SST & GFS • Missing aerosol • Possible systematic biases in CRTM • Updates to the ACSPO algorithm
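The double-difference construction described above reduces to a few lines. The sketch below is illustrative only: it assumes per-platform clear-sky brightness temperatures and their CRTM simulations are already matched, and it uses Metop-A as the reference, as on the following slide; the numbers in the example are hypothetical.

```python
import numpy as np

# Illustrative sketch of M-O biases and double differences (not the MICROS code).

def mo_bias(bt_model, bt_obs):
    """Mean Model-minus-Observation bias for one platform/channel,
    e.g. CRTM-simulated minus observed clear-sky 11 um brightness temperatures."""
    return float(np.nanmean(np.asarray(bt_model) - np.asarray(bt_obs)))

def double_differences(mo_biases, reference="Metop-A"):
    """mo_biases: {platform: M-O bias}. Using the model as a transfer standard,
    errors common to all platforms (Reynolds SST, GFS, missing aerosol, CRTM biases)
    largely cancel, leaving the cross-platform BT (in)consistencies."""
    ref = mo_biases[reference]
    return {p: b - ref for p, b in mo_biases.items() if p != reference}

# Hypothetical numbers for illustration only:
dd = double_differences({"Metop-A": -0.20, "NOAA-18": -0.15, "NOAA-16": 0.35})
print(dd)  # {'NOAA-18': 0.05, 'NOAA-16': 0.55}
```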

  39. AVHRR M-O Biases @11 µm M-O Biases change in time but remain largely consistent between platforms

  40. AVHRR Double Differences @11 µm (Ref=Metop-A) Double Differences emphasize cross-platform BT (in)consistencies NOAA-16 shows anomalous behavior. Other platforms show cross-platform systematic biases of several hundredths-to-tenths of a Kelvin

  41. NESDIS Summary • At NESDIS, the emphasis is on global, online, near-real time diagnostics for • Uniformly QC’ed in situ SSTs (iQuam) • Satellite SSTs, from all data producers (SQUAM) • SST Clear-Sky Ocean Radiances, to attribute SST anomalies (MICROS) • SQUAM monitors satellite L2 SSTs against • Uniformly QC’ed in situ SSTs (from iQuam) (Heritage VAL) • Global L4 fields (OSTIA, Reynolds, etc.) (Self-Consistency Checks) • Cross-evaluated for consistency (Cross-Consistency Checks) • MICROS monitors SST Radiances • Against RTM with first-guess SST/upper-air fields • Useful for VAL of SST, RTM, first-guess fields, sensor radiances

  42. Summary of recommendations • Cal/val plans must concentrate on mandatory tasks • Access to proper tools/data; products available as early as possible • Instrument and product performance monitoring • In situ data remain the validation source • High-accuracy ship-board radiometers to provide SI-traceable skin SST measurements • Access to (quality-assured) independent SST data from drifters, profilers and moorings • Erroneous buoys must be blacklisted • Global statistics may hide serious regional problems • Conditions must be identified that cause algorithmic problems
