
NOAA Testbeds

Objective Evaluation of Aviation-Related Variables during the 2010 Hazardous Weather Testbed (HWT) Spring Experiment


Presentation Transcript


  1. Objective Evaluation of Aviation-Related Variables during the 2010 Hazardous Weather Testbed (HWT) Spring Experiment
  Tara Jensen1*, Steve Weiss2, Jason J. Levit3, Michelle Harrold1, Lisa Coco1, Patrick Marsh4, Adam Clark4, Fanyou Kong5, Kevin Thomas5, Ming Xue5, Jack Kain4, Russell Schneider2, Mike Coniglio4, and Barbara Brown1
  1 NCAR/Research Applications Laboratory (RAL), Boulder, Colorado
  2 NOAA/Storm Prediction Center (SPC), Norman, Oklahoma
  3 NOAA/Aviation Weather Center (AWC), Kansas City, Missouri
  4 NOAA/National Severe Storms Laboratory (NSSL), Norman, Oklahoma
  5 Center for Analysis and Prediction of Storms (CAPS), University of Oklahoma, Norman, Oklahoma

  2. NOAA Testbeds (NOAA/ESRL/GSD, NCAR/RAL/JNT). Funded by: NOAA, USWRP, AFWA, NCAR. A bridge between research and operations:
  • Community code support
  • Testing and evaluation
  • Verification research
  A distributed facility with 23 staff members at NOAA/ESRL or NCAR/RAL and 2 staff at NOAA/NCEP.

  3. HWT-DTC Collaboration Objectives
  • Supplement HWT Spring Experiment subjective assessments with objective evaluation of experimental forecasts contributed to the Spring Experiment
  • Expose forecasters and researchers to both traditional and new approaches for verifying forecasts
  • Further the DTC mission of testing and evaluation of cutting-edge NWP for R2O

  4. 2010 Models. Domains: 2/3 CONUS and a VORTEX2 daily region of interest (moved daily). Observations were NSSL Q2 data.
  • CAPS Storm-Scale Ensemble – 4 km (all 26 members plus products)
  • CAPS deterministic – 1 km
  • SREF ensemble products – 32-35 km
  • NAM – 12 km
  • HRRR – 3 km
  • NSSL – 4 km
  • MMM – 3 km
  • NAM high-res window – 4 km

  5. General Approach for Objective Evaluation of Contributed Research Models
  [Flow diagram: MODELS and OBS feed the DTC Model Evaluation Tools (MET), which produce traditional statistics output and spatial* statistics output to the web, stratified by REGIONS. *Spatial = object-oriented.]

  6. Statistics and Attributes calculated using MET (a sketch of the categorical scores follows this list)
  Traditional (categorical):
  • Gilbert Skill Score (GSS, aka ETS)
  • Critical Success Index (CSI, aka Threat Score)
  • Frequency Bias
  • Probability of Detection (POD)
  • False Alarm Ratio (FAR)
  Object-oriented, from MODE (computed between matched forecast and observed object pairs):
  • Centroid Distance
  • Area Ratio
  • Angle Difference
  • Intensity Percentiles
  • Intersection Area
  • Boundary Distance
  • etc.
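To make the traditional scores concrete, here is a minimal sketch of how each listed categorical score falls out of a 2x2 contingency table. The function name and example counts are illustrative; this is not MET's implementation.

```python
# Minimal sketch of the categorical scores listed above, computed from a
# 2x2 contingency table. Function name and example counts are illustrative.

def categorical_scores(hits, misses, false_alarms, correct_negatives):
    """Scores for one threshold (e.g., REFC >= 40 dBZ) on a matched grid pair."""
    total = hits + misses + false_alarms + correct_negatives
    # Hits expected by chance; the correction used by GSS (aka ETS)
    hits_random = (hits + misses) * (hits + false_alarms) / total
    return {
        "GSS":   (hits - hits_random) / (hits + misses + false_alarms - hits_random),
        "CSI":   hits / (hits + misses + false_alarms),    # aka Threat Score
        "POD":   hits / (hits + misses),                   # Probability of Detection
        "FAR":   false_alarms / (hits + false_alarms),     # False Alarm Ratio
        "FBIAS": (hits + false_alarms) / (hits + misses),  # Frequency Bias
    }

# Illustrative counts only
print(categorical_scores(hits=120, misses=80, false_alarms=260, correct_negatives=9540))
```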

  7. HWT 2010 Spring Experiment
  Severe – Probability of Severe: winds, hail, tornadoes. Evaluation: traditional and spatial. Thresholds: REFC at 20, 25, 30, 35, 40, 50, 60 dBZ.
  QPF – Probability of Extreme: 0.5 inches in 6 hr, 1.0 inches in 6 hr, max accumulation. Evaluation: traditional and spatial. Thresholds: APCP and Prob. at 0.5, 1.0, 2.0 inches in 3 h and 6 h.
  Aviation – Probability of Convection: echoes > 40 dBZ; echo top height > 25 kft, > 35 kft. Evaluation: traditional and spatial. Thresholds: RETOP at 25, 30, 35, 40, 45 kft.
  (A sketch of how these thresholds produce contingency counts follows.)
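The thresholds above are what reduce continuous fields to the contingency counts used by the score sketch after slide 6. A hedged sketch with illustrative variable names and synthetic fields:

```python
# Sketch of how one threshold (e.g., REFC >= 40 dBZ) reduces a matched
# forecast/observed grid pair to 2x2 contingency counts. Variable names and
# the synthetic fields are illustrative.
import numpy as np

def contingency_counts(fcst, obs, threshold):
    f, o = fcst >= threshold, obs >= threshold
    return {
        "hits":              int(np.sum(f & o)),
        "false_alarms":      int(np.sum(f & ~o)),
        "misses":            int(np.sum(~f & o)),
        "correct_negatives": int(np.sum(~f & ~o)),
    }

rng = np.random.default_rng(1)
fcst_refc = rng.uniform(0, 60, size=(100, 100))  # synthetic reflectivity fields
obs_refc = rng.uniform(0, 60, size=(100, 100))
print(contingency_counts(fcst_refc, obs_refc, threshold=40.0))
```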

  8. Preliminary Results

  9. Caveats. Please consider these results preliminary.
  • 25 samples of 00Z runs – not quite enough to assign statistical significance
  • Aggregations:
    • represent the median of the 25 samples (17 May – 18 Jun 2010)
    • were generated using an alpha version of the METviewer database and display system

  10. Object Definition
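MODE defines objects by convolving the raw field with a circular filter and thresholding the smoothed result. Below is a minimal sketch of that convolution-threshold step using scipy; the radius, threshold, and synthetic field are illustrative choices, not the HWT configuration or MODE's actual code.

```python
# Minimal sketch of MODE-style object definition: convolve the field with a
# circular (disk) filter, threshold the smoothed field, and label the
# resulting connected regions. Radius, threshold, and the synthetic field
# are illustrative, not the HWT configuration.
import numpy as np
from scipy import ndimage

def define_objects(field, radius=5, threshold=30.0):
    """Return labeled object masks and the object count for a 2-D field."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = (x**2 + y**2 <= radius**2).astype(float)
    disk /= disk.sum()                                  # normalized disk kernel
    smoothed = ndimage.convolve(field, disk, mode="nearest")
    labels, n_objects = ndimage.label(smoothed >= threshold)
    return labels, n_objects

# Two synthetic "storms" on an otherwise empty grid
field = np.zeros((200, 200))
field[60:80, 60:90] = 45.0
field[140:150, 120:160] = 50.0
labels, n = define_objects(field)
print(f"{n} objects found")   # -> 2
```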

  11. Use of Attributes of Objects defined by MODE (observed field vs. forecast field)
  • Centroid Distance: provides a quantitative sense of the spatial displacement of the cloud complex. Small is good.
  • Axis Angle: provides an objective measure of linear orientation. A small difference is good.
  • Area Ratio = Fcst Area / Obs Area: provides an objective measure of whether the areal extent of cloud is over- or under-predicted. Close to 1 is good.
  (A sketch of these pairwise attributes follows.)
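A hedged sketch of how these pairwise attributes can be computed from boolean object masks. The helper names and the simplified principal-axis geometry are illustrative; this is not MODE's internal code.

```python
# Hedged sketch of the pairwise object attributes named above, computed from
# boolean masks. Helper names are illustrative; this is not MODE's code.
import numpy as np

def centroid(mask):
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()

def axis_angle(mask):
    """Orientation of the object's major axis (degrees in [0, 180))."""
    ys, xs = np.nonzero(mask)
    coords = np.stack([ys - ys.mean(), xs - xs.mean()])
    evals, evecs = np.linalg.eigh(np.cov(coords))
    major = evecs[:, np.argmax(evals)]              # major-axis direction
    return np.degrees(np.arctan2(major[0], major[1])) % 180.0

def angle_difference(a1, a2):
    d = abs(a1 - a2)
    return min(d, 180.0 - d)                        # axes are unoriented lines

def pair_attributes(fcst_mask, obs_mask):
    (fy, fx), (oy, ox) = centroid(fcst_mask), centroid(obs_mask)
    return {
        "centroid_distance": float(np.hypot(fy - oy, fx - ox)),  # small is good
        "angle_difference": angle_difference(axis_angle(fcst_mask),
                                             axis_angle(obs_mask)),
        "area_ratio": fcst_mask.sum() / obs_mask.sum(),          # ~1 is good
    }

# Two synthetic elliptical objects, with the forecast displaced and enlarged
yy, xx = np.mgrid[0:100, 0:100]
obs = (yy - 50)**2 / 400 + (xx - 50)**2 / 100 <= 1
fcst = (yy - 55)**2 / 400 + (xx - 60)**2 / 150 <= 1
print(pair_attributes(fcst, obs))
```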

  12. Use of Attributes of Objects defined by MODE (continued)
  • P50/P90 Intensity: provides objective measures of the median (50th percentile) and near-peak (90th percentile) intensities found in objects. A ratio close to 1 is good. Example: Obs P50 = 26.6, P90 = 31.5; Fcst P50 = 29.0, P90 = 33.4.
  • Symmetric Difference (the non-intersecting area): may be a good summary statistic for how well forecast and observed objects match. Small is good.
  • Total Interest: a summary statistic derived from a fuzzy-logic engine with user-defined interest maps for all of these attributes plus some others. Close to 1 is good. Example: Total Interest = 0.75.
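Continuing the sketch above, here is one hedged way to compute the remaining attributes and fold everything into a total interest. The interest maps and weights below are made-up placeholders; MODE's defaults (and the HWT configuration) differ.

```python
# Continuation of the sketch above: intensity percentiles, symmetric
# difference, and a fuzzy-logic total interest. The interest maps and
# weights are made-up placeholders; MODE's defaults differ.
import numpy as np

def intensity_percentiles(field, mask):
    """P50 (median) and P90 (near-peak) intensity within one object."""
    return np.percentile(field[mask], [50, 90])

def symmetric_difference(fcst_mask, obs_mask):
    """Non-intersecting area in grid squares; small is good."""
    return int(np.logical_xor(fcst_mask, obs_mask).sum())

def total_interest(attrs):
    """Weighted mean of per-attribute interest values, each mapped to [0, 1]."""
    interest = {
        # Placeholder interest maps: 1.0 is a perfect match, 0.0 is no interest
        "centroid_distance": max(0.0, 1.0 - attrs["centroid_distance"] / 100.0),
        "angle_difference": max(0.0, 1.0 - attrs["angle_difference"] / 90.0),
        "area_ratio": min(attrs["area_ratio"], 1.0 / attrs["area_ratio"]),
    }
    weights = {"centroid_distance": 2.0, "angle_difference": 1.0, "area_ratio": 1.0}
    return sum(weights[k] * interest[k] for k in interest) / sum(weights.values())

# Using the pair_attributes() output from the previous sketch:
attrs = {"centroid_distance": 11.2, "angle_difference": 0.0, "area_ratio": 1.22}
print(total_interest(attrs))   # close to 1 means a good match
```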

  13. Example: Radar Echo Tops, 1-hr forecast valid 9 June 2010, 01 UTC
  [RETOP panels: NSSL Q2 Observed, HRRR, CAPS Mean, CAPS 1 km. Overlays mark the observed objects, matched object 1, matched object 2, and an unmatched object.]

  14. Example: Radar Echo Tops, 1-hr forecast valid 9 June 2010, 01 UTC
  [Same panels and object overlays as slide 13.]

  15. Example: Radar Echo Tops, 1-hr forecast valid 9 June 2010, 01 UTC (RETOP vs. NSSL Q2 Observed)

  Attribute           HRRR       CAPS Mean   CAPS 1 km
  Centroid Distance   27.06 km   24.56 km    30.52 km
  Angle Diff          1.56 deg   5.83 deg    5.87 deg
  Area Ratio          1.17       2.77        2.48
  Symmetric Diff      1372 gs    2962 gs     2735 gs
  P50 Ratio           4.13       4.13        4.13
  Total Interest      1.00       0.93        0.94

  16. Example: Radar Echo Tops – the Ensemble Mean is not always so useful
  [RETOP panels: Observed, CAPS Mean, and individual members using the Thompson, WSM6, WDM6, and Morrison microphysics schemes.]

  17. Traditional Stats – GSS (aka ETS)
  [Plots comparing models: CAPS Ensemble Mean; CAPS 1 km; CAPS SSEF ARW-CN (control with radar assimilation); CAPS SSEF ARW-C0 (control without radar assimilation); 3 km HRRR; 12 km NAM.]

  18. Traditional Stats – Frequency Bias
  [Plots comparing the same models as slide 17: CAPS Ensemble Mean; CAPS 1 km; CAPS SSEF ARW-CN (control with radar assimilation); CAPS SSEF ARW-C0 (control without radar assimilation); 3 km HRRR; 12 km NAM.]

  19. MODE Attributes – Area Ratio

  20. MODE Attributes – Symmetric Diff

  21. Summary
  • 30 models and 4 ensemble products were evaluated during HWT 2010.
  • Most models had reflectivity as a variable.
  • 3 models had radar echo top as a variable (HRRR, CAPS Ensemble, CAPS 1 km).
  • All models appear to over-predict RETOP areal coverage: by at least a factor of 2-5 based on frequency bias, and by a factor of 5-10 based on the MODE area ratio.
  • Based on some traditional and object-oriented metrics, HRRR appears to have a slight edge over the CAPS simulations for RETOP during the 2010 Spring Experiment, but the differences are not statistically significant.
  • The ensemble post-processing technique (seen in the Ensemble Mean) seems to inflate the over-prediction of the areal extent of the cloud shield to a non-useful level.
  Additional evaluation of the probability of exceeding 40 dBZ is planned for later this winter.

  22. Thank You… Questions? The DTC would like to thank all of the AWC participants who helped improve our evaluation through their comments and suggestions.
  Evaluation: http://verif.rap.ucar.edu/hwt/2010
  MET: http://www.dtcenter.org/met
  Email: jensen@ucar.edu
  Support for the Developmental Testbed Center (DTC) is provided by NOAA, AFWA, NCAR, and NSF.
