Verification of SREF Aviation Forecasts at Binghamton, NY




Presentation Transcript


  1. Verification of SREF Aviation Forecasts at Binghamton, NY Justin Arnott NOAA / NWS Binghamton, NY

  2. Motivation • Ensemble information is making an impact throughout the forecast process • Examples: SREFs, MREFs, NAEFS, ECMWF Ensemble • SREF resolution is reaching the mesoscale (32-45 km), a scale at which some aviation impacts may be resolvable • Can SREFs provide useful information in the aviation forecast process?

  3. SREF • 21 member multi-model ensemble • 10 Eta (32 km, 60 vertical levels) • 3 NCEP - NMM WRF (40 km, 52 vertical levels) • 3 NCAR - ARW WRF (45 km, 35 vertical levels) • 5 NCEP - RSM (45 km, 28 vertical levels) • Various IC/BCs, physical parameterizations

  4. SREF Aviation Forecasts • Numerous aviation parameters are created from the 09Z and 21Z simulations • For creating TAFs, CIG/VSBY fields may provide the most potential use • Some are output directly, some are derived • Include: CIG, VSBY, icing, turbulence, jet stream, shear, convection, precipitation, freezing level, fog, etc. http://wwwt.emc.ncep.noaa.gov/mmb/SREF/SREF.html
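A probabilistic CIG/VSBY field of the kind described above can be derived from the raw members by counting how many fall below a threshold. This is only a sketch of that idea, not the operational NCEP post-processing; the member values below are hypothetical:

```python
def ensemble_probability(member_values, threshold):
    """Fraction of ensemble members forecasting a value below `threshold`."""
    hits = sum(1 for v in member_values if v < threshold)
    return hits / len(member_values)

# Hypothetical ceiling forecasts (ft AGL) from a 21-member ensemble:
ceilings = [1200, 800, 900, 2500, 600, 1500, 3000, 700, 1100, 950,
            4000, 500, 850, 2000, 1300, 750, 650, 900, 1800, 1000, 550]

# Probability of an IFR ceiling (below 1000 ft) from these members:
prob_ifr_cig = ensemble_probability(ceilings, 1000)
```

The same counting works for visibility thresholds; thresholding this probability (e.g., at 50%, 30%, 20%, or 10%) turns it into a yes/no forecast that can be verified against observations.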

  5. SREF Aviation Forecasts • Verification of SREF CIG/VSBY forecasts has been minimal • Alaska Region has completed a study using SREF MEAN values • No verification study has been conducted over the lower 48

  6. ~40 km Expectations • CIGS/VSBYS can vary greatly on scales far smaller than the 32-45 km scale of the SREFs • Some MVFR/IFR events are more localized than others • Summer MVFR/IFR tends to be more localized • Winter MVFR/IFR is typically more widespread • Bottom Line: Expect relatively poor SREF performance during the warm season, with improvements during the cool season
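The MVFR/IFR categories used throughout are the standard NWS flight categories, defined by ceiling and visibility thresholds (this classifier is an illustrative sketch, not code from the study):

```python
def flight_category(cig_ft, vsby_sm):
    """Classify a ceiling (ft AGL) / visibility (statute miles) pair
    using the standard NWS flight category thresholds."""
    if cig_ft < 500 or vsby_sm < 1:
        return "LIFR"   # low IFR
    if cig_ft < 1000 or vsby_sm < 3:
        return "IFR"
    if cig_ft <= 3000 or vsby_sm <= 5:
        return "MVFR"   # marginal VFR
    return "VFR"
```

The more restrictive of the two elements controls the category, so a high ceiling with poor visibility still verifies as IFR.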

  7. The Study So Far… • Gather SREF CIG/VSBY data daily starting July 1, 2008 • Data provided specifically for the project by Binbin Zhou at NCEP • Compute POD/FAR/CSI/BIAS statistics for July-September at KBGM • MVFR and IFR combined (due to small sample size) • Investigate basing forecasts on different probability thresholds • 50%, 30%, 20%, 10%
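The POD/FAR/CSI/BIAS statistics named above come from a standard 2x2 contingency table of hits, false alarms, and misses. A minimal sketch (the counts below are made up for illustration, not the study's data):

```python
def verification_stats(hits, false_alarms, misses):
    """Categorical verification scores from a 2x2 contingency table."""
    pod = hits / (hits + misses)                    # Probability of Detection
    far = false_alarms / (hits + false_alarms)      # False Alarm Ratio
    csi = hits / (hits + false_alarms + misses)     # Critical Success Index
    bias = (hits + false_alarms) / (hits + misses)  # frequency bias
    return pod, far, csi, bias

# Hypothetical counts of MVFR/IFR forecast/observed pairs:
pod, far, csi, bias = verification_stats(hits=12, false_alarms=20, misses=6)
```

A bias above 1 means the event was forecast more often than it was observed (overforecasting), which is how the large positive biases in the results below should be read.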

  8. The Study So Far…continued • Compare SREF results to WFO Binghamton, NY and GFS MOS forecasts • Use stats-on-demand web interface to obtain this data

  9. Results • Very little MVFR/IFR at KBGM in July-September • IFR or MVFR only ~10% of the time • So, we’re aiming at a very small target!

  10. Results – MVFR/IFR CIGS

  11. Results – MVFR/IFR CIGS • WFO BGM/GFS MOS more skillful than the SREF mean or any SREF probability threshold • 30% probability threshold shows best skill • Large false alarm ratios with nearly all SREF forecasts • Large positive biases for SREF mean and nearly all probability thresholds • i.e., overforecasting MVFR/IFR CIGS

  12. Comparing Apples with Oranges? • These results compare 9-21 hr SREF forecasts with 0-6 hour WFO BGM forecasts and 6-12 hr GFS forecasts • Due to later availability of SREF data (09Z SREFs not available for use until 18Z TAFs) • How well does a 9-24 hr GFS MOS (or BGM) forecast perform? • 21 hr not available using stats-on-demand

  13. Results – MVFR/IFR CIGS

  14. Results – MVFR/IFR CIGS • WFO BGM / GFS MOS performance does not decrease substantially by changing the comparison time window

  15. Results – MVFR/IFR VSBYS

  16. Results – MVFR/IFR VSBYS • SREF Mean as well as 30 and 20% thresholds fail to identify enough cases to be useful • 10% threshold shows greatest skill and is comparable to GFS MOS forecasts! • There is a significant positive bias at this threshold

  17. Results – IFR CIGS

  18. Results – IFR CIGS • SREF Mean poor at identifying IFR CIGS • CSI scores for SREF probability fields are an improvement on WFO BGM/GFS MOS • Bias scores indicate underforecasting at the 30% threshold but large overforecasting for the 20% and 10% thresholds • WFO BGM/GFS MOS tend to underforecast IFR CIGS

  19. Results – IFR VSBYS

  20. Results – IFR VSBYS • SREF cannot readily identify IFR VSBY situations except at the 10% threshold • Tremendous biases indicate, however, that even these forecasts are not useful

  21. Summary • SREF performance occasionally comparable to GFS MOS → potentially useful guidance • Promising for “~direct” model output • Hampered by later arrival time at WFO • MEAN fields show little/no skill • Different probability thresholds show best skill for different variables/categories • CIGS: • SREFS frequently overforecast MVFR/IFR CIGS • SREFS perform surprisingly well with IFR CIGS • Best performing probability thresholds are 20-30%, balancing BIAS with CSI

  22. Summary, continued • VSBYS: • SREFS have trouble identifying VSBY restrictions • 10% probability threshold necessary to get any signal, but this may be useful for MVFR/IFR (not IFR alone)

  23. Future Plans • Continue computing statistics through the upcoming cool season • Expect improved results given more widespread (i.e. resolvable) restrictions • Expand to other WFO BGM TAF sites • Work with NOAA/NWS/NCEP in improving calculations of CIG/VSBY

  24. Acknowledgements • Binbin Zhou – NOAA/NWS/NCEP • For providing access to SREF data in near real-time
