Problems With Model Physics in Mesoscale Models

Clifford F. Mass, University of Washington, Seattle, WA

Presentation Transcript


  1. Problems With Model Physics in Mesoscale Models. Clifford F. Mass, University of Washington, Seattle, WA

  2. Major Improvements in Mesoscale Prediction • Major improvements in the skill of mesoscale models as resolution has increased to 3-15 km. • Since mesoscale predictability is highly dependent on synoptic predictability, advances in synoptic observations and data assimilation have produced substantial forecast skill benefits. • Although model physics has improved, there are still major weaknesses that need to be overcome.

  3. Important to Know the Strengths and Weaknesses of Our Tools

  4. Very Complex, Because Model Physics Schemes Interact With Each Other, AND With Model Dynamics

  5. Some Physics Issues with the WRF Model that Are Shared With Virtually All Other Mesoscale Models

  6. Overmixing in Mesoscale Models • Most mesoscale models have problems maintaining shallow, stable, cool layers near the surface. • Excessive vertical mixing produces surface temperatures that are too warm and winds that are too strong under stable conditions. • Such periods are traditionally ones in which weather forecasters can greatly improve over the models, or over the models plus statistical post-processing. (An illustrative sketch of the overmixing mechanism follows.)
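The mechanism can be seen in a toy first-order (mixing-length) closure, K = l² |dU/dz| f(Ri). The sketch below is illustrative only: the two stability-function forms and all constants are assumptions, not the formulas used in YSU or any other WRF scheme. It contrasts a "long-tail" f(Ri), which never fully shuts off mixing, with a sharp-cutoff form that stops mixing beyond a critical Richardson number:

```python
# Toy first-order closure: K = l^2 * |dU/dz| * f(Ri).
# Illustrative functional forms only; not the actual WRF/YSU code.

def eddy_diffusivity(shear, mixing_length, Ri, stability_fn):
    """Eddy diffusivity (m^2/s) from a mixing-length closure."""
    return mixing_length**2 * abs(shear) * stability_fn(Ri)

def long_tail(Ri):
    # "Long-tail" form: mixing decays slowly and never fully vanishes,
    # so the model keeps eroding stable layers and inversions.
    return 1.0 / (1.0 + 10.0 * Ri)

def sharp_cutoff(Ri):
    # Sharp-cutoff form: mixing stops beyond a critical Ri (~0.2).
    return (1.0 - 5.0 * Ri) ** 2 if Ri < 0.2 else 0.0

shear = 5e-3          # s^-1: weak shear across a nocturnal stable layer (assumed)
mixing_length = 30.0  # m (assumed)

for Ri in (0.05, 0.25, 1.0, 5.0):
    k_lt = eddy_diffusivity(shear, mixing_length, Ri, long_tail)
    k_sc = eddy_diffusivity(shear, mixing_length, Ri, sharp_cutoff)
    print(f"Ri={Ri:4.2f}  K_long_tail={k_lt:5.3f}  K_sharp_cutoff={k_sc:5.3f} m^2/s")
```

Even at Ri = 5, the long-tail form still mixes (K ≈ 0.09 m²/s here), which over a multi-day episode is enough to erode a shallow cold pool or fog layer; the sharp-cutoff form leaves it intact.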

  7. Cold spell: Time series of bias in MAX-T (daily maximum temperature) over the U.S., 1 August 2003 – 1 August 2004. Mean temperature over all stations is shown with a dotted line. A 3-day smoothing is applied to the data.
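For reference, a minimal sketch of how such a bias time series is built, with synthetic data (the array names and values below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_stations = 366, 50                  # 1 Aug 2003 - 1 Aug 2004
obs_maxt = rng.normal(18.0, 8.0, (n_days, n_stations))              # observed MAX-T (C)
fcst_maxt = obs_maxt + rng.normal(-0.5, 1.5, (n_days, n_stations))  # model MAX-T (C)

# Mean bias (forecast minus observed) over all stations, per day.
daily_bias = (fcst_maxt - obs_maxt).mean(axis=1)

# Centered 3-day running mean, matching the smoothing in the figure.
smoothed_bias = np.convolve(daily_bias, np.ones(3) / 3.0, mode="same")

print(f"annual mean MAX-T bias: {daily_bias.mean():+.2f} C")
print(f"smoothed bias range: [{smoothed_bias.min():+.2f}, {smoothed_bias.max():+.2f}] C")
```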

  8. Shallow Fog…Nov 19, 2005 • Held in at low levels for days. • Associated with a shallow, cold, moist layer with an inversion above. • MM5 and WRF predicted the inversion…but generally without the shallow mixed layer of cold air a few hundred meters deep. • Neither MM5 nor WRF could maintain the moisture at low levels.

  9. Observed Conditions

  10. High-Resolution Model Output

  11. So What is the Problem? • We are using the Yonsei University (YSU) scheme in most work. We have tried all available WRF PBL schemes…no obvious solution in any of them. The same behavior is obvious in other models and PBL parameterizations. • It doesn't improve going from 36- to 12-km resolution; 1.3 km is slightly better. • There appear to be common flaws in most boundary layer schemes, especially under stable conditions.

  12. Problems with WRF surface winds • WRF generally has a substantial overprediction bias for all but the lightest winds. • Not enough light winds. • Winds are generally too geostrophic over land. • Not enough contrast between winds over land and water. • This problem is evident virtually everywhere and appears to occur in all PBL schemes available with WRF. • Worst in stable conditions.
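A minimal sketch of verifying that pattern (the synthetic data and variable names are invented for illustration): stratify the 10-m speed bias by observed speed, and compare the forecast and observed frequencies in the lightest bin to expose the "not enough light winds" signature.

```python
import numpy as np

rng = np.random.default_rng(1)
obs = rng.gamma(2.0, 2.0, 10_000)                                # observed 10-m speeds (m/s)
fcst = np.clip(obs + rng.normal(0.8, 1.0, obs.size), 0.0, None)  # biased-high forecasts

bins = (0, 2, 4, 6, 8, 12, 20)
for lo, hi in zip(bins[:-1], bins[1:]):
    in_bin = (obs >= lo) & (obs < hi)
    n_fcst = ((fcst >= lo) & (fcst < hi)).sum()  # forecast frequency in the same bin
    bias = (fcst[in_bin] - obs[in_bin]).mean()   # additive bias, F - O
    print(f"{lo:2d}-{hi:2d} m/s: obs n={in_bin.sum():5d}  fcst n={n_fcst:5d}  bias={bias:+.2f} m/s")
```

With a high-biased forecast, the 0-2 m/s bin is underpopulated in the forecasts relative to the observations, and the additive bias is positive in nearly every speed bin, mirroring the behavior described above.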

  13. 10-m wind bias, 00 UTC, 24-h forecast, Jan 1-Feb 8, 2010

  14. 10-m wind bias, 12 UTC, 12-h forecast, Jan 1-Feb 8, 2010

  15. The Problem

  16. Insufficient Contrast Between Land and Water

  17. This Problem is Evident in Many Locations

  18. Northeast U.S. from SUNY Stony Brook (courtesy of Brian Colle): 12-36-hr wind bias for the NE US; additive bias (F-O, forecast minus observed)

  19. SUNY Stony Brook: Wind Bias over Extended Period for One Ensemble Member

  20. U.S. Army WRF over Utah

  21. Cheng and Steenburgh 2005 (circles are WRF)

  22. UW WRF 36-12-4 km: Positive Bias (figure annotations: “Change in System, July 2006”; “Now”)

  23. Wind Direction Bias: Too Geostrophic

  24. MAE (mean absolute error) is something we like to forget…
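A toy example (invented numbers) makes the point: signed errors cancel in the bias but not in the MAE, so a model can look unbiased while its typical error is large.

```python
import numpy as np

errors = np.array([+3.0, -3.0, +2.0, -2.0])  # forecast-minus-observed errors
print("bias =", errors.mean())               # 0.0 -- looks perfect
print("MAE  =", np.abs(errors).mean())       # 2.5 -- large typical error
```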

  25. Surface Wind Problems • Clearly, there are flaws in current planetary boundary layer schemes. • But could there also be another problem: the inability to consider sub-grid scale variability in terrain and land use?

  26. The 12-km grid versus terrain

  27. A new surface drag parameterization • Determine the subgrid terrain variance and make the surface drag or roughness used in the model dependent on it. • In consultation with Jimy Dudhia of NCAR, we came up with an approach: enhancing u*, and only in the boundary layer scheme (YSU). • For our 12-km and 36-km runs, we used the variance of 1-km grid-spacing terrain (sketched below).
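A hedged sketch of that idea in Python. The block-variance computation follows directly from the description above; the square-root scaling, the constant alpha, and the cap are invented placeholders, since the slides do not give the exact functional form of the u* enhancement.

```python
import numpy as np

def subgrid_terrain_std(fine_terrain, ratio):
    """Standard deviation (m) of fine-grid terrain within each coarse cell.

    fine_terrain: 2-D array of 1-km elevations; ratio: coarse/fine grid
    ratio (e.g., 12 for a 12-km model built from 1-km terrain).
    """
    ny, nx = fine_terrain.shape
    blocks = fine_terrain[: ny - ny % ratio, : nx - nx % ratio]
    blocks = blocks.reshape(ny // ratio, ratio, nx // ratio, ratio)
    return blocks.std(axis=(1, 3))

def enhanced_ustar(ustar, terrain_std, alpha=0.02, cap=2.0):
    """Scale u* up over rough subgrid terrain (applied only in the PBL scheme).

    alpha, the sqrt form, and the cap are illustrative assumptions.
    """
    factor = np.minimum(1.0 + alpha * np.sqrt(terrain_std), cap)
    return ustar * factor

# Toy example: smooth plains (left half) vs rough mountains (right half)
# on a 1-km terrain grid, aggregated to 12-km cells.
rng = np.random.default_rng(2)
terrain = np.where(np.arange(48)[None, :] < 24,
                   rng.normal(100.0, 5.0, (48, 48)),     # plains
                   rng.normal(1500.0, 300.0, (48, 48)))  # mountains
std_12km = subgrid_terrain_std(terrain, ratio=12)
print(enhanced_ustar(0.3, std_12km).round(3))  # larger u* over the rough half
```

Applying the enhancement only inside the PBL scheme, rather than to the surface fluxes seen by every parameterization, matches the slide's description of modifying u* only in YSU.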

  28. 38 Different Experiments: Multi-month evaluation, winter and summer

  29. Some Results for Experiment “71” • Ran the modeling system over a five-week test period (Jan 1 – Feb 8, 2010).

  30. 10-m wind speed bias: Winter Original

  31. With Parameterization

  32. MAE 10m wind speed

  33. With Parameterization

  34. Case Study: Original

  35. New Parameterization

  36. Old New

  37. During the 1990s it became clear that there were problems with the simulated precipitation and microphysical distributions • Apparent in the MM5 forecasts at 12- and 4-km grid spacing • Also obvious in research simulations of major storm events

  38. Early Work, 1995-2000 (mainly MM5, but the results are more general) • Relatively simple microphysics: water, ice/snow, no supercooled water, no graupel. • Tendency for overprediction on the windward slopes of mountain barriers; only for the heaviest observed amounts was there no overprediction. • Tendency for underprediction to the lee of mountains.

  39. MM5 precipitation bias for 24-h totals over an entire winter season; the 90% and 160% lines are contoured with dashed and solid lines, respectively.
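In these verification figures, bias is expressed as a percentage ratio of forecast to observed precipitation, so 100% means no bias, values above 100% indicate windward overprediction, and values below 100% indicate lee underprediction. A minimal sketch with invented station totals:

```python
# Seasonal precipitation bias ratio: 100 * sum(forecast) / sum(observed).
# Station names and totals (mm) are invented for illustration.
seasonal_fcst = {"windward_stn": 1600.0, "crest_stn": 2100.0, "lee_stn": 380.0}
seasonal_obs  = {"windward_stn": 1000.0, "crest_stn": 2000.0, "lee_stn": 500.0}

for stn in seasonal_fcst:
    ratio = 100.0 * seasonal_fcst[stn] / seasonal_obs[stn]
    print(f"{stn:12s}: bias = {ratio:5.1f}%")  # >100% wet bias, <100% dry bias
```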

  40. Testing more sophisticated schemes and higher resolution, ~2000 • Testing of ultra-high resolution (~1 km) and better microphysics schemes (e.g., with supercooled water and graupel) showed some improvements, but fundamental problems remained: e.g., a lee dry bias, and overprediction for light to moderate events but not the heaviest. • Example: simulations of the 5-9 February 1996 flood by Colle and Mass (2000).

  41. 5-9 February 1996 Flooding Event

  42. MM5: Little Windward Bias, Too Dry in Lee (figure labels: “Windward slope,” “Lee”; a bias of 100% = no bias)

  43. Flying Blind
