
The Risks and Rewards of High-Resolution and Ensemble Modeling Systems



Presentation Transcript


  1. The Risks and Rewards of High-Resolution and Ensemble Modeling Systems David Schultz NOAA/National Severe Storms Laboratory Paul Roebber University of Wisconsin at Milwaukee Brian Colle State University of New York at Stony Brook David Stensrud NOAA/National Severe Storms Laboratory http://www.nssl.noaa.gov/~schultz

  2. Objectives of this Talk • Discuss issues for operational weather forecasting in going to higher-resolution NWP. • Briefly compare advantages and disadvantages of high-resolution simulations versus lower-resolution ensembles. • Example: 3 May 1999 Oklahoma tornado outbreak. • Discuss unresolved scientific issues that will lead to improving predictability for operational forecasters.

  3. High-Resolution NWP • High resolution (< 6 km) is now possible in real time owing to increasing computer power and the real-time distribution of data from national and international modeling centers. • Many groups have demonstrated high-resolution real-time NWP (Mass and Kuo 1998). • High-resolution models can reproduce small-scale weather features (e.g., sea breezes, orographic precipitation, frontal circulations, convection).

  4. But, . . . • Using models to study physical processes and using them to make weather forecasts are two distinctly different applications of the same tool. • There is no guarantee that a high-resolution model will be more useful to forecasters than a model with larger grid spacing. • Model errors may increase with increasing resolution, because high-resolution models have more degrees of freedom. • High-resolution models may produce wonderfully detailed, but inaccurate, forecasts.

  5. Ensemble Modeling Systems • Ensembles of lower-resolution models can have greater skill than a single higher-resolution forecast (e.g., Wandishin et al. 2001; Grimit and Mass 2001). • Ensemble forecasts directly express uncertainty through their inherently probabilistic nature. • But what is the minimum resolution needed for “accurate” simulations? • And how is an ensemble best constructed?
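
A minimal sketch of how an ensemble's inherently probabilistic nature can be expressed numerically: the probability of an event is estimated as the fraction of members forecasting it. The array shapes, the 10-mm threshold, and the gamma-distributed placeholder data are illustrative assumptions, not output from any system discussed in this talk.

```python
# Sketch: exceedance probability from ensemble members (hypothetical data).
import numpy as np

# precip_members: (n_members, ny, nx) accumulated precipitation [mm]
rng = np.random.default_rng(0)
precip_members = rng.gamma(shape=2.0, scale=5.0, size=(6, 50, 50))

threshold_mm = 10.0  # event definition, chosen arbitrarily for illustration

# Probability of exceeding the threshold = fraction of members that exceed it
prob_exceed = (precip_members > threshold_mm).mean(axis=0)

print(prob_exceed.shape)                      # (50, 50)
print(prob_exceed.min(), prob_exceed.max())   # values between 0 and 1
```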

  6. The Forecast Process • Hypothesis Formation • Forecaster develops a conceptual understanding of the forecast scenario (“problem of the day”) • Hypothesis Testing • Forecaster seeks “evidence” that will confirm or refute hypothesis • observations, NWP output, conceptual models • Continuous process • Prediction • Forecaster conceptual model of forecast scenario(s) (e.g., Doswell 1986; Doswell and Maddox 1986; Hoffman 1991; Pliske et al. 2003)

  7. Intuitive Forecasters • Defined by Pliske et al. (2003) as forecasters who construct a conceptual understanding of their forecasts from dynamic, visual images (as opposed to “rules of thumb”). • Such forecasters would benefit from both high-resolution forecasts and ensembles: • Seeing detailed structures and evolutions not possible in lower-resolution models • Developing alternative scenarios from ensembles • Constructing probabilistic forecasts

  8. 3 May 1999 Oklahoma Outbreak • 66 tornadoes, produced by 10 long-lived and violent supercell thunderstorms • 45 fatalities, 645 injuries in Oklahoma • ~2300 homes destroyed; 7400 damaged • Over $1 billion in damage, the nation’s most expensive tornado outbreak (Photos: Jarboe; Schultz; Daily Oklahoman)

  9. [Figure: Observed radar imagery (courtesy of Travis Smith, NSSL) alongside a 2-km MM5 simulation initialized 25 hours earlier (no data assimilation). Pink: 1.5-km w (> 0.5 m/s); blue: 9-km cloud-ice mixing ratio (> 0.1 g/kg). Panels at 0100, 0131, 0200, and 0221 UTC; Moore, OK marked.]

  10. [Figure: Stage IV radar/gauge precipitation analysis (Baldwin and Mitchell 1997); Moore, OK marked.]

  11. Modeled Storms as Supercells • Identify updrafts (> 5 m/s) correlated with vertically coherent relative vorticity for at least 60 minutes • 22 supercells, 11 of which are along the OK–TX border
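
The identification criterion on this slide lends itself to a simple sketch: flag grid points where an updraft stronger than 5 m/s is collocated with cyclonic vertical vorticity and that condition persists for at least 60 minutes. The vorticity threshold, the 15-minute output interval, and the random placeholder fields are assumptions; a real analysis would track individual storm objects rather than fixed grid points.

```python
# Simplified, point-wise version of the slide's persistence criterion.
import numpy as np

def persistent_supercell_mask(w, zeta, dt_min=15.0,
                              w_thresh=5.0, zeta_thresh=1e-3,
                              min_duration_min=60.0):
    """w, zeta: arrays of shape (ntime, ny, nx) -- updraft speed [m/s] and
    relative vertical vorticity [1/s] at a representative mid-level."""
    rotating_updraft = (w > w_thresh) & (zeta > zeta_thresh)

    # Longest run of consecutive output times the criterion holds at each point
    run = np.zeros(rotating_updraft.shape[1:], dtype=int)
    longest = np.zeros_like(run)
    for t in range(rotating_updraft.shape[0]):
        run = np.where(rotating_updraft[t], run + 1, 0)
        longest = np.maximum(longest, run)

    return longest * dt_min >= min_duration_min

# Example with random placeholder fields: 6 h of 15-min output on a 40x40 grid
rng = np.random.default_rng(1)
w = rng.normal(1.0, 3.0, size=(24, 40, 40))
zeta = rng.normal(0.0, 5e-4, size=(24, 40, 40))
mask = persistent_supercell_mask(w, zeta)
print(mask.sum(), "grid points meet the persistence criterion")
```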

  12. Observed vs Modeled Supercells

  13. Ensembles (Stensrud and Weiss) • 36-km MM5 simulations initialized 24 h in advance • Six members with varying model physics packages: 3 convective schemes (Kain–Fritsch, Betts–Miller–Janjic, Grell) and 2 PBL schemes (Blackadar, Burk–Thompson)
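
The six members amount to every combination of the three convective schemes with the two PBL schemes; a small sketch of that enumeration (member numbering is arbitrary, and these are scheme names rather than actual MM5 namelist settings):

```python
# Enumerate the 3 convective x 2 PBL scheme combinations described above.
from itertools import product

convective_schemes = ["Kain-Fritsch", "Betts-Miller-Janjic", "Grell"]
pbl_schemes = ["Blackadar", "Burk-Thompson"]

members = [
    {"member": i + 1, "cumulus": cu, "pbl": pbl}
    for i, (cu, pbl) in enumerate(product(convective_schemes, pbl_schemes))
]

for m in members:
    print(m)   # six members in total
```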

  14. [Figure: Ensemble-mean convective precipitation, 2300 UTC 3 May to 0000 UTC 4 May (contoured every 0.1 mm).]

  15. [Figure: Convective available potential energy (J/kg); panels show the ensemble mean, ensemble spread, ensemble maximum, and ensemble minimum, with labeled values of 2000, 750, 2000, and 1000 J/kg, respectively.]

  16. [Figure: Storm-relative helicity (m² s⁻²); panels show the ensemble mean, ensemble spread, ensemble maximum, and ensemble minimum, with labeled values of 75, 200, 200, and 200 m² s⁻², respectively.]

  17. [Figure: Bulk Richardson number shear (m² s⁻²); panels show the ensemble mean, ensemble spread, ensemble minimum, and ensemble maximum, with labeled values of 40, 20, 40, and 40 m² s⁻², respectively.]
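
The four summary fields shown on the preceding three slides (ensemble mean, spread, maximum, and minimum) can be computed from stacked member output in a few lines. Here "spread" is assumed to be the standard deviation across members, and the CAPE values below are random placeholders rather than the 3 May 1999 fields.

```python
# Sketch: ensemble summary statistics from stacked member output.
import numpy as np

def ensemble_summary(field):
    """field: array of shape (n_members, ny, nx), e.g. CAPE [J/kg],
    storm-relative helicity [m^2/s^2], or BRN shear [m^2/s^2]."""
    return {
        "mean":   field.mean(axis=0),
        "spread": field.std(axis=0),   # standard deviation across members
        "max":    field.max(axis=0),
        "min":    field.min(axis=0),
    }

rng = np.random.default_rng(2)
cape = rng.uniform(0.0, 3000.0, size=(6, 60, 60))   # placeholder CAPE members
stats = ensemble_summary(cape)
print({name: float(grid.mean()) for name, grid in stats.items()})
```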

  18. Comparison • Neither the high-resolution forecast nor the ensemble forecasts placed the bulk of the precipitation in the right place in central Oklahoma. • Both approaches indicated the potential for supercell thunderstorms with tornadoes in the Oklahoma–Texas region. • Both were sensitive to the choice of parameterization schemes (e.g., PBL).

  19. Remaining Scientific Issues • When should forecasters believe the model forecast as a literal forecast? • What is the role of model formulation in predictability? • What is the value of mesoscale data assimilation in the initial conditions? • What constitutes an appropriate measure of mesoscale predictability? • What is the appropriate role of postprocessing model output (e.g., neural networks, bias-correction techniques)? • Other examples and further discussion can be found in a manuscript currently in preparation.
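
As one hedged illustration of the postprocessing question raised above, a running additive bias correction subtracts the mean of recent forecast-minus-observed errors from the next raw forecast. The 14-sample window and the synthetic temperature data are assumptions made only for this sketch, not a method from the talk.

```python
# Sketch: running additive bias correction of a point forecast.
import numpy as np

def bias_corrected_forecast(past_forecasts, past_observations, new_forecast,
                            window=14):
    """Subtract the mean error over the last `window` forecast/observation
    pairs from the new raw forecast value."""
    errors = np.asarray(past_forecasts[-window:]) - np.asarray(past_observations[-window:])
    return new_forecast - errors.mean()

# Example: a forecast that has been running ~1.5 degrees too warm
rng = np.random.default_rng(3)
obs = 20.0 + rng.normal(0.0, 2.0, size=30)
fcst = obs + 1.5 + rng.normal(0.0, 0.5, size=30)
print(bias_corrected_forecast(fcst, obs, new_forecast=25.0))
```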
