
Observing System Evaluation (OSEval) Breakout Session



Presentation Transcript


  1. Observing System Evaluation (OSEval) Breakout Session
  Peter Oke, Gilles Larnicol
  Santa Cruz, California, USA, 13-17 June 2011

  2. Discussion Topics
  • Routine OSEs – assess their status and how to improve them
  • Techniques for OSEs/OSSEs – is the methodology sound?
  • Community nature run for OSSEs
  • Quantifying the impact of assimilating "bad data" – how do we document bad data (black lists, reports to data providers)?
  • What should be the null hypothesis? Observations are useful unless demonstrated otherwise, or observations are not useful unless demonstrated otherwise?
  • How to measure impact? Skill scores only, or process studies? Assessing the impact of new and future observing systems (planned experiments)
  • How can the scientific/operational community reach the "policy makers"? What is the role of GOV? ETOOFS? Etc.
  • Recent literature review

  3. Routine Evaluation: Near-Real-Time OSEs
  • Dan Lea – NRT OSEs were time-consuming, but the UK Met Office intends to continue performing them.
  • Fabrice Hernandez – Mercator intends to begin performing NRT OSEs for one forecast cycle each month, starting in the second half of 2011.
  • Gary Brassington – the Bureau of Meteorology intends to begin performing NRT OSEs, starting in the second half of 2011.
  • Yosuke Fujii – JMA does not have the resources to perform NRT OSEs.
  • Pat Hogan/Jim Cummings – NRL tends to perform specialised OSEs to address specific "events". NRL does not have the human resources to analyse NRT OSEs, but if the required metrics were clearer and toolboxes were available, NRL could consider performing these community OSEs.
  • Mario Adani – the Mediterranean Sea forecast system cannot contribute.
  • Clemente Tanajura – the Brazilian system is not ready to contribute to NRT OSEs yet.
  • Jiang Zhu – the Chinese system intends to contribute next year.
  • Chris Edwards – the California Current System cannot resource this yet; it may be able to contribute in 2012.
  • Alexandre Kurapov – the Oregon coast system performs delayed-mode OSEs, but not NRT OSEs.
  • Gregory Smith – the Canadian system could consider participating starting in 2012.
  • Balakrishnan Nair – the Indian system is not ready yet.
  • TOPAZ – probably cannot afford it.
  • Discussion about what fields to store and how to evaluate each system. It is agreed that common metrics and common tools are something we should aim for (one possible impact metric is sketched below).
  • Suggest a 7- or 8-month repeat cycle.
  • Need to maintain an email list of participating groups/contacts so that we can modify our schedule in response to observing system events (e.g., when a new data stream, like CryoSat, becomes available).
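
The slides do not specify what the common metrics would be. As a hedged illustration of the kind of diagnostic a shared toolbox might provide, the sketch below computes the fractional RMSE change between a control run and a data-denial run, both verified against the same observations; the function names (`rmse`, `ose_impact`) and the numbers are hypothetical.

```python
import numpy as np

def rmse(forecast, obs):
    """Root-mean-square difference between forecast and observed values."""
    d = np.asarray(forecast, dtype=float) - np.asarray(obs, dtype=float)
    return float(np.sqrt(np.nanmean(d ** 2)))

def ose_impact(control_fcst, denial_fcst, obs):
    """Fractional increase in RMSE when a data stream is withheld.

    A positive value indicates that assimilating the withheld observations
    reduces forecast error, i.e. the data stream has a beneficial impact.
    """
    r_control = rmse(control_fcst, obs)
    r_denial = rmse(denial_fcst, obs)
    return (r_denial - r_control) / r_control

# Illustrative numbers only: SST forecasts verified against independent obs
obs     = [15.2, 15.0, 14.8, 15.1]
control = [15.1, 15.1, 14.9, 15.0]   # all observations assimilated
denial  = [15.4, 14.6, 15.2, 14.7]   # e.g. one data stream withheld
print(f"RMSE increase when data withheld: {ose_impact(control, denial, obs):+.0%}")
```

Verifying both runs against the same independent observations keeps the comparison between systems fair, whatever metric is ultimately agreed.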

  4. Observing System Experiments and Observing System Simulation Experiments
  • Is the methodology sound?

  5. Community nature run for OSSEs
  • Requirements (simple realism checks are sketched below):
    • Realistic variability:
      • Good representation of climatology (T/S, MSL)
      • Good representation of variance (SLA, SST)
    • Long run – how long?
    • High resolution
    • Global?
  • Question: Is it wise to spend our limited resources performing OSSEs?
  • Response: Perhaps the funding agencies will want to see efforts to optimise the design of observing systems.
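
The requirements above (realistic climatology and variance) lend themselves to simple quantitative checks. The sketch below is only an illustration, assuming the nature run and an observed reference are available as (time, lat, lon) arrays; the function name `realism_checks` and the toy SLA fields are hypothetical.

```python
import numpy as np

def realism_checks(nature_run, observed):
    """Compare a nature-run field with an observed reference over time.

    Both inputs are arrays of shape (time, lat, lon); returns the area-mean
    climatological bias and the ratio of temporal variances (ideally near 1).
    """
    nr = np.asarray(nature_run, dtype=float)
    ob = np.asarray(observed, dtype=float)
    bias = float(np.nanmean(np.nanmean(nr, axis=0) - np.nanmean(ob, axis=0)))
    var_ratio = float(np.nanmean(np.nanvar(nr, axis=0)) /
                      np.nanmean(np.nanvar(ob, axis=0)))
    return {"mean_bias": bias, "variance_ratio": var_ratio}

# Toy example with random fields standing in for SLA (metres)
rng = np.random.default_rng(0)
obs_sla = 0.10 * rng.standard_normal((120, 10, 10))   # monthly, 10x10 grid
run_sla = 0.09 * rng.standard_normal((120, 10, 10))   # slightly under-energetic run
print(realism_checks(run_sla, obs_sla))
```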

  6. Impact of bad data
  • Quantifying the impact of assimilating "bad data"?
  • How do we document bad data? Black lists, reports to data providers? What is the current practice?
  • When NOAA detects "bad" satellite data, it reports the error to the observational agency.
  • Mercator: Argo – black list; SLA – informal communication to the Aviso provider.
  • NRL: compiles its own "reject list" that could be shared.
  • BoM: much was learned from the QC inter-comparison activity.
  • Argo – JCOMM OPS (Mathieu Belbeoch) has indicated a willingness to receive information.
  • Under MyOcean, feedback on all in situ data is planned to be disseminated to Coriolis – not JCOMM OPS.
  • Non-European groups could follow this approach – but this will not work for data accessed from the GTS or US GODAE.
  • Action: Gary Brassington to determine the format of rejected data reported to JCOMM OPS.
  • Plan: each forecast system should commit to maintaining its own reject list (an illustrative record format is sketched below).
  • JCOMM OPS intends to maintain a list of data counts of what is assimilated from each forecast centre.
  • See NOAA iQuam and NOAA SQUAM for protocols for satellite data.
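
The format for reporting rejected data is still to be determined (see the action item above), so the following is only an illustrative sketch of what a shareable reject-list record might contain; the field names and the example entry are hypothetical.

```python
import csv
from dataclasses import dataclass, asdict

@dataclass
class RejectedObservation:
    platform_id: str      # e.g. WMO number or altimeter mission name
    variable: str         # "T", "S", "SLA", "SST", ...
    start_time: str       # ISO 8601; period over which data are rejected
    end_time: str
    reason: str           # free-text justification for the rejection
    reported_by: str      # forecast centre maintaining this list

def write_reject_list(path, entries):
    """Write a reject list as CSV so it can be shared between centres."""
    fields = list(RejectedObservation.__dataclass_fields__)
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=fields)
        writer.writeheader()
        for entry in entries:
            writer.writerow(asdict(entry))

# Hypothetical entry for illustration only
write_reject_list("reject_list.csv", [
    RejectedObservation("5901234", "S", "2011-03-01", "2011-06-01",
                        "salinity drift detected against climatology", "BoM"),
])
```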

  7. Null hypothesis
  • What should be the null hypothesis?
  • Observations are useful unless demonstrated otherwise, or
  • Observations are not useful unless demonstrated otherwise (see the sketch below)
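
Either choice can be framed as a significance test on paired forecast errors from a control and a data-denial run. The sketch below is a minimal illustration using a paired t-test; the error values are made up, and in practice serial correlation between forecast cycles would also need to be accounted for.

```python
import numpy as np
from scipy import stats

# Per-cycle forecast errors (e.g. RMSE against withheld observations) from a
# control run and a data-denial run over a series of forecast cycles.
control_err = np.array([0.41, 0.38, 0.44, 0.40, 0.39, 0.43, 0.42, 0.37])
denial_err  = np.array([0.45, 0.41, 0.44, 0.46, 0.40, 0.47, 0.44, 0.42])

# Null hypothesis: withholding the observations does not change the error.
# A small p-value is evidence that the observations are useful.
t_stat, p_value = stats.ttest_rel(denial_err, control_err)
print(f"mean error increase = {np.mean(denial_err - control_err):.3f}, "
      f"p = {p_value:.3f}")
```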

  8. Evaluation
  • How to measure impact? Skill scores only, or process studies? (A simple skill score is sketched below.)
  • Assessing the impact of new and future observing systems (planned experiments)
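
As one example of the skill-score side of this question, the sketch below computes a Murphy-type skill score of a forecast relative to a reference such as persistence or climatology; the numbers are illustrative only.

```python
import numpy as np

def skill_score(forecast, reference, obs):
    """Murphy-type skill score: 1 - MSE(forecast) / MSE(reference).

    1 is a perfect forecast, 0 means no improvement over the reference
    (e.g. persistence or climatology), and negative values are worse.
    """
    f, r, o = (np.asarray(x, dtype=float) for x in (forecast, reference, obs))
    mse_f = np.nanmean((f - o) ** 2)
    mse_r = np.nanmean((r - o) ** 2)
    return float(1.0 - mse_f / mse_r)

# Illustrative numbers: forecasts scored against persistence as the reference
obs         = [15.2, 15.0, 14.8, 15.1]
persistence = [15.0, 15.2, 15.0, 14.8]
forecast    = [15.1, 15.1, 14.9, 15.0]
print(f"skill relative to persistence: {skill_score(forecast, persistence, obs):.2f}")
```

Process studies remain complementary: a single score summarises impact, but it does not explain which dynamical processes the observations constrain.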

  9. Dissemination of results (covered in the final breakout session)
  • How can the scientific/operational community reach the "policy makers"?
  • What is the role of:
    • GOV?
    • ETOOFS?
    • Etc.
