
Intercomparisons Working Group activities

Intercomparisons Working Group activities. Prepared by F. Hernandez, L. Crosnier, N. Verbrugge, K. Lisaeter, L. Bertino, F. Davidson, M. Kamachi, G. Brassington, P. Oke, A. Schiller, C. Maes, J. Cummings and the MERSEA assessment group. Definition of metrics at the global level: where are we?



Presentation Transcript


  1. Intercomparisons Working Group activities
  Prepared by F. Hernandez, L. Crosnier, N. Verbrugge, K. Lisaeter, L. Bertino, F. Davidson, M. Kamachi, G. Brassington, P. Oke, A. Schiller, C. Maes, J. Cummings and the MERSEA assessment group.
  Definition of metrics at the global level: where are we?
  • Class 1, 2, 3 and 4 metrics definition
  • Available observations and climatologies
  • Implementation in practice: data servers, formats, etc.
  Plan for GODAE intercomparisons: what do we decide?

  2. Intercomparisons Working Group activities
  Definition of metrics at the global level: objectives in GODAE
  We all define and use diagnostics to assess our models and forecasting systems, but that is not the point here. The purpose is to define and test common ways to validate the systems in the framework of GODAE by:
  • Choosing a common methodology for validation (what are we looking at)
  • Defining a set of diagnostics (the « metrics »)
  • Choosing a common set of references (climatologies, observations, ...)
  • Then promoting this work as standards

  3. The validation « philosophy » • Basic principles. Defined for ocean hindcast and forecast (Le Provost 2002, MERSEA Strand 1): • Consistency: verifying that the system outputs are consistent with the current knowledge of the ocean circulation and climatologies • Quality (or accuracy of the hindcast) quantifying the differences between the system “best results” (analysis)and the sea truth, as estimated from observations, preferably using independent observations (not assimilated). • Performance (or accuracy of the forecast): quantifying the short term forecast capacity of each system, i.e. Answering the questions “do we perform better than persistency? better than climatology?… • A complementary principal, to verify the interest for the customer (Pinardi and Tonani, 2005, MFS): • Benefit: end-user assessment of which quality level has to be reached before the product is useful for an application

  4. Metrics definition (MERSEA heritage)
  • CLASS 1 like: regular grid and a few depths, daily averaged
  • Comparison of the 2D model surface fields with observed SST, SLA, and SSM/I ice concentration and drift for the Arctic and Baltic areas
  • Comparison of each model (T, S) with climatological (T, S, mixed layer depth) at several depths (0 m, 100 m, 500 m, 1000 m)
  • CLASS 2 like: high-resolution vertical sections and moorings
  • Comparison of the model sections with climatology and WOCE/CLIVAR/other/XBT hydrographic sections
  • Comparison of the model SLA at tide gauge locations, and of the model (T, S, U, V) at fixed mooring locations
  • CLASS 3 like: physical quantities derived from model variables
  • Comparison of the model volume transport with available observations (Florida cable measurements, ...)
  • Assessment through integrated/derived quantities: Meridional Overturning Circulation, Warm Water Heat Content, etc.
  • CLASS 4 like: assessment of forecasting capabilities
  • Comparison between climatology, forecast, hindcast, analysis and observations
  • Comparison in 15x15 degree boxes or dedicated boxes of each model with T/S CORIOLIS, SSM/I sea ice concentration, tide gauges
  • SST high resolution? SLA AVISO?

  5. Metrics definition over the world ocean [map with regions labelled MERSEA, BLUELink, GODAE Workshop]

  6. Agreement on Class 1 regional files [map with grid resolutions 1/2° and 1/4°]. Variables: T, S, U, V, SSH, MLD, BSF, Tx, Ty, Qtot + relaxation, E-P-R + relaxation, MDT (MSSH)

  7. [map with grid resolutions 1/2° and 1/6° over several regions]. Variables: T, S, U, V, SSH, MLD, BSF, Tx, Ty, Qtot + relaxation, E-P-R + relaxation, MDT (MSSH)

  8. [map with grid resolutions 1/2°, 1/8°, and 1/6° or 1/8°]. Variables: T, S, U, V, SSH, MLD, BSF, Tx, Ty, Qtot + relaxation, E-P-R + relaxation, MDT (MSSH), plus sea ice variables and fluxes

  9. Assessment through Class 1 metrics
  • Consistency: monthly averaged fields compared to:
  • WOA 2005, Hydrobase, CARS, MEDATLAS, Janssen climatologies
  • de Boyer Montégut MLD climatology
  • SST climatology?
  • Quality: daily fields compared to:
  • Dynamic topography or SLA (AVISO products)
  • SST (to be determined)
  • SSM/I sea ice concentration and drift products
  • Surface currents (DBCP data, OSCAR, SURCOUF products)
  • Performance:
  • Class 1 analyses, hindcasts and forecasts can be compared
  • The Class 1 format can also be used to store assimilation quantities: innovation and residual vectors
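As an illustration of the consistency check above, the sketch below computes an area-weighted RMS difference between a monthly-mean Class 1 field and a climatology regridded to the same regular lat/lon grid. It is not part of the presentation; the array names, grid layout and toy values are assumptions.

```python
import numpy as np

def area_weighted_rms(model_monthly, climatology, lat):
    """RMS of (model - climatology) on a regular lat/lon grid,
    weighted by cos(latitude) to account for grid-cell area."""
    diff = model_monthly - climatology                     # shape (nlat, nlon)
    weights = np.cos(np.deg2rad(lat))[:, np.newaxis]       # shape (nlat, 1)
    weights = np.broadcast_to(weights, diff.shape)
    mask = ~np.isnan(diff)                                 # ignore land / missing points
    return np.sqrt(np.sum(weights[mask] * diff[mask] ** 2) /
                   np.sum(weights[mask]))

# Toy 1/2-degree grid (assumed layout: lat x lon, NaN over land).
lat = np.arange(-89.75, 90.0, 0.5)
lon = np.arange(0.25, 360.0, 0.5)
model_sst = 15.0 + 10.0 * np.cos(np.deg2rad(lat))[:, None] * np.ones((1, lon.size))
woa_sst   = model_sst + np.random.normal(0.0, 0.3, model_sst.shape)  # stand-in for a climatology

print("SST consistency RMS vs climatology (degC):",
      round(float(area_weighted_rms(model_sst, woa_sst, lat)), 3))
```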

  10. Class 2/3: MERSEA/GODAE global metrics: online systematic diagnostics [world map of moorings, XBT lines, sections and transports: WOCE, CLIVAR, Canadian sections, SOOP, GLOSS, TAO, PIRATA, MFS, OceanSITES moorings; inset panels: model T vs XBT observed T, model vs WOCE-CLIVAR section, volume transport across the Florida Strait (model vs cable), model vs tide gauge SLA time series]

  11. Revisiting Class 2/3 metrics in the North Atlantic: C-NOOF has already started comparisons with AZMP sections

  12. Class 2/3 metrics: proposed definitions
  • MERSEA first proposition
  • Revised proposition of M. Kamachi
  • Ongoing work with C. Maes and M. Kamachi
  • Ongoing work with BLUElink and SPICE people
  • Still needs to be discussed (IRD, Peru, Chile contacts)

  13. Class 3 metrics
  • Already defined or ready to be implemented:
  • Transport computation (discussed)
  • MOC
  • Sea ice volume and extent
  • What else can be implemented (in-line or off-line), in relation with GSOP?
  • Monitoring of western boundary currents (Kuroshio and Gulf Stream path and axis)
  • Heat content of specific water masses (tropical areas, Warm Water Heat Content)
  • Mesoscale monitoring by region: EKE time series, SLA spectrum
  • Tropical dynamics monitoring: Niño boxes, SLA/SST Hovmöller diagrams
  • Water mass distribution, T-S diagrams
  • Lagrangian statistics, particle dispersion, ...
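As an example of the first item (transport computation), the sketch below integrates the model velocity normal to a section to obtain a volume transport in Sverdrups, as one would do for the Florida cable comparison. It is only a sketch under assumed array shapes; the variable names and toy numbers are not from the presentation.

```python
import numpy as np

def volume_transport_sv(u_normal, dz, dy):
    """Volume transport through a section, in Sverdrups (1 Sv = 1e6 m^3/s).

    u_normal : velocity normal to the section, shape (ndepth, npoints), m/s
    dz       : layer thicknesses, shape (ndepth,), m
    dy       : along-section cell widths, shape (npoints,), m
    """
    cell_area = dz[:, np.newaxis] * dy[np.newaxis, :]   # m^2 per section cell
    transport = np.nansum(u_normal * cell_area)         # m^3/s
    return transport / 1.0e6

# Toy section roughly the size of the Florida Strait: ~800 m deep, ~80 km wide.
dz = np.full(16, 50.0)          # 16 layers of 50 m
dy = np.full(16, 5000.0)        # 16 points of 5 km
u  = np.full((16, 16), 0.5)     # 0.5 m/s mean flow through the section

print("Transport (Sv):", round(volume_transport_sv(u, dz, dy), 1))
# ~32 Sv in this toy case, of the order of the observed Florida cable transport.
```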

  14. Assessment through Class 2/3 metrics
  • Consistency: Class 2 sections, moorings and monthly averaged fields compared to:
  • WOA 2005, Hydrobase, CARS, MEDATLAS, Janssen climatologies
  • de Boyer Montégut MLD climatology
  • Quality: daily fields at sections/moorings compared to:
  • In-situ T/S (XBT, Argo, tropical moorings, etc.)
  • Sea ice data (OSI SAF)
  • Tide gauges
  • ADCP currents
  • Performance:
  • Analyses, hindcasts and forecasts can be compared at sections and mooring locations
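To make the quality check above concrete, the sketch below interpolates a daily model temperature field to a mooring position and to the observed depths before differencing, which is the typical Class 2 comparison. The interpolation routine, array layout and toy profile are assumptions, not the working group's implementation.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Assumed daily model field on (depth, lat, lon) with regular axes.
depth = np.array([0.0, 10.0, 50.0, 100.0, 200.0, 500.0])
lat   = np.arange(-10.0, 10.5, 0.5)
lon   = np.arange(140.0, 180.5, 0.5)
t_model = 28.0 - 0.02 * depth[:, None, None] + np.zeros((depth.size, lat.size, lon.size))

interp = RegularGridInterpolator((depth, lat, lon), t_model)

# TAO-like mooring at 0N, 165E with temperatures at a few depths.
obs_depth = np.array([1.0, 25.0, 75.0, 150.0])
obs_temp  = np.array([28.1, 27.4, 26.0, 24.5])
points = np.column_stack([obs_depth,
                          np.full_like(obs_depth, 0.0),     # mooring latitude
                          np.full_like(obs_depth, 165.0)])  # mooring longitude

model_at_obs = interp(points)
diff = model_at_obs - obs_temp
print("bias (degC):", diff.mean(), " rms (degC):", np.sqrt((diff ** 2).mean()))
```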

  15. Comparison to global tide gauge SLA

  16. Class 4 metrics, concept and implementation
  Class 1, 2 and 3 metrics can be applied to any field produced by the forecasting system (hindcasts, nowcasts or forecasts). More specifically, Class 4 metrics aim to measure the performance of the forecasting system: its capability to describe the ocean (hindcast mode) as well as its forecasting skill (analysis and forecast mode) at once. All fields are evaluated using identical criteria. From the assimilation point of view, these diagnostics are performed in observation space.
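The observation-space idea amounts to building matchups: for each observation in the verification window, the climatology, persistence, forecast and analysis values are all brought to the observation's position and time, so that every field is scored against the same criterion. The sketch below illustrates that matchup step for a single point; the field names, toy values and the nearest-time pairing are assumptions for illustration only.

```python
import numpy as np

def matchup(field_times, field_values, obs_times, obs_values):
    """Pair each observation with the field value nearest in time
    (spatial interpolation omitted for brevity) and return differences."""
    idx = np.abs(field_times[:, None] - obs_times[None, :]).argmin(axis=0)
    return field_values[idx] - obs_values

# Daily fields over a 7-day window (T0 .. T0+7), one value per day at one point.
days = np.arange(0, 8, dtype=float)
analysis    = np.array([18.2, 18.3, 18.4, 18.6, 18.7, 18.9, 19.0, 19.1])
forecast    = analysis + np.random.normal(0.0, 0.2, days.size)
persistence = np.full(days.size, analysis[0])   # initial conditions held constant
climatology = np.full(days.size, 17.8)

# Observations falling inside the window (fractional days).
obs_t = np.array([0.5, 2.3, 4.8, 6.1])
obs_v = np.array([18.2, 18.5, 18.8, 19.0])

for name, field in [("analysis", analysis), ("forecast", forecast),
                    ("persistence", persistence), ("climatology", climatology)]:
    d = matchup(days, field, obs_t, obs_v)
    print(f"{name:12s} rms = {np.sqrt(np.mean(d ** 2)):.3f}")
```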

  17. Class 4 metrics, concept and implementation [schematic of one model variable over the time window T0 to T0+7 used to compute statistics; legend: truth, climatology, initial conditions (previous analysis), forecast, persistence, observations, analysis, hindcast]

  18. [maps of Class 4 squared differences averaged over 0-100 m: (Tobs - Tmod)^2, (Sobs - Smod)^2, (Sobs - Sclim)^2, (Tobs - Tclim)^2] Here, only BA and PF files are used (no TE or MO files).

  19. Compute Class 4 statistics
  • per geographical box, or in regular 5x5 degree boxes
  • per vertical layer (0-100 m, 100-500 m, 500-5000 m?)
  [map: elementary box patchwork]
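A minimal sketch of that binning step follows, assuming the Class 4 matchup differences come with latitude, longitude and depth: each difference is assigned to a 5x5 degree box and a vertical layer, and an RMS is accumulated per bin. The layer bounds come from the slide; the code itself is illustrative, not the agreed implementation.

```python
import numpy as np
from collections import defaultdict

LAYERS = [(0.0, 100.0), (100.0, 500.0), (500.0, 5000.0)]   # metres, from the slide

def box_layer_rms(lats, lons, depths, diffs, box_deg=5.0):
    """RMS of (obs - model) differences per 5x5 degree box and vertical layer."""
    sums = defaultdict(lambda: [0.0, 0])            # (box, layer) -> [sum of squares, count]
    for lat, lon, dep, d in zip(lats, lons, depths, diffs):
        box = (int(np.floor(lat / box_deg)), int(np.floor((lon % 360.0) / box_deg)))
        for k, (top, bottom) in enumerate(LAYERS):
            if top <= dep < bottom:
                acc = sums[(box, k)]
                acc[0] += d * d
                acc[1] += 1
                break
    return {key: np.sqrt(ss / n) for key, (ss, n) in sums.items() if n > 0}

# Toy matchups: a handful of Argo-like temperature differences (degC).
lats   = np.array([42.1, 43.7, 41.9, 12.3])
lons   = np.array([-48.2, -47.1, -49.9, 165.0])
depths = np.array([50.0, 250.0, 800.0, 75.0])
diffs  = np.array([0.4, -0.6, 0.2, 0.1])

for (box, layer), rms in box_layer_rms(lats, lons, depths, diffs).items():
    print(f"box {box} layer {LAYERS[layer]} m: rms = {rms:.2f} degC")
```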

  20. Class 4 based on sea ice in the Barents Sea: TOPAZ sea ice vs SSM/I data. RMS of the ice concentration error (model minus observation) over a box in the Arctic Ocean; the analysis is compared to the forecast and to persistence over a 10-day window.

  21. Assessing the performance of the system
  • Class 1, 2 and 3 metrics can be used if applied to forecast, persistence, etc.
  • Class 4 metrics are defined in the "observation space":
  • Use observations and compute differences with model fields
  • T/S, sea level at tide gauges, OSI SAF sea ice
  • Defining a shareable dataset is mandatory
  • SST? Altimetry? Surface drifters?
  • Compute statistics of the differences per box (typically every week)
  • Compare these statistics to infer the performance
  • Possible diagnostics in the "model space": not defined!

  22. Implementation in practice
  • All these metrics need:
  • A similar implementation
  • Conventions for names and formats (NetCDF COARDS/CF)
  • Data servers for exchanges (FTP, OpenDAP)
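The sketch below shows one way such a Class 1 file could be written with CF-style metadata using the netCDF4 Python library, and notes that the same call can read a remote copy through an OpenDAP URL. The file name, variable names and attributes are illustrative assumptions, not the agreed GODAE/MERSEA convention.

```python
import numpy as np
from netCDF4 import Dataset

# Write a minimal CF-style Class 1 daily-mean SST field (illustrative layout).
lat = np.arange(-89.75, 90.0, 0.5)
lon = np.arange(0.25, 360.0, 0.5)
sst = 15.0 + 10.0 * np.cos(np.deg2rad(lat))[:, None] * np.ones((1, lon.size))

with Dataset("class1_example.nc", "w") as nc:
    nc.Conventions = "CF-1.0"
    nc.title = "Example Class 1 daily mean field"
    nc.createDimension("latitude", lat.size)
    nc.createDimension("longitude", lon.size)
    v_lat = nc.createVariable("latitude", "f4", ("latitude",))
    v_lon = nc.createVariable("longitude", "f4", ("longitude",))
    v_sst = nc.createVariable("sst", "f4", ("latitude", "longitude"))
    v_lat.units, v_lat.standard_name = "degrees_north", "latitude"
    v_lon.units, v_lon.standard_name = "degrees_east", "longitude"
    v_sst.units, v_sst.standard_name = "degree_Celsius", "sea_surface_temperature"
    v_lat[:], v_lon[:], v_sst[:, :] = lat, lon, sst

# Reading works the same way whether the path is a local file or an OpenDAP URL
# served by a data server (no real server URL is implied here).
with Dataset("class1_example.nc") as nc:
    print(nc.variables["sst"].shape, nc.variables["sst"].units)
```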

  23. GODAE metrics definition: summary
  • Class 1 and 2: definition and technical implementation guidelines finished by the end of 2007
  • Class 3: only transport and MOC fully defined by the end of 2007
  • Any other diagnostics to be included?
  • Class 4: definition and technical implementation guidelines for T/S and sea ice finished by the end of 2007
  • Tide gauges, SST and surface velocity need agreements on the observation data sets
  • Nothing defined in the "state space"

  24. Intercomparisons Working Group activities
  Definition of metrics at the global level: where are we?
  • Class 1, 2, 3 and 4 metrics definition
  • Available observations and climatologies
  • Implementation in practice: data servers, formats, etc.
  Plan for GODAE intercomparisons: what do we decide?

  25. GODAE Intercomparison Working Group
  • Decide on an intercomparison exercise at the IGST XII meeting?
  • Objectives? (internal, or dedicated to wide publicity, ...)
  • "Who", "when", "how"?
  • Plans for implementation
  • Dedicated distributed archive: OpenDAP
  • What are the possibilities for "ocean assessment":
  • Intercomparison on hindcasts (or reanalyses) over a period in the past
  • Off-line comparison of operational systems (as in MERSEA)
  • Real-time comparison of operational systems for a given period
  • Other possibility: assessing the system in operation:
  • Complementary use of Key Parameter Indicators to verify the technical efficiency of the system
  • Other possibility: looking for user feedback (but then no need for all the metrics already defined!)
