
Verification Studies of Model-Derived Cloud Fields in Support of Aviation Applications



Presentation Transcript


  1. Verification Studies of Model-Derived Cloud Fields in Support of Aviation Applications. Paul A. Kucera, Tara Jensen, Courtney Weeks, Cory Wolff, David Johnson, and Barbara Brown. National Center for Atmospheric Research (NCAR) / Research Applications Laboratory (RAL) / Joint Numerical Testbed (JNT), Boulder, Colorado, USA. 02 December 2010

  2. NCAR's Joint Numerical Testbed: Who We Are and What We Do
  • Mission: to support the sharing, testing, and evaluation of research and operational NWP systems, and to facilitate the transfer of research capabilities to operational prediction centers
  • The Developmental Testbed Center (DTC) is an integral part of the JNT (and vice versa); the teams share staff, community systems, and evaluation methods
  • The DTC is a multi-institutional effort among the JNT, AFWA, NOAA, and the NWP community to help bridge research and operational NWP efforts
  • The DTC National Office (Director: Bill Kuo) is located in the JNT

  3. Aviation-Related Evaluation within the JNT
  We have two main aviation-related projects within the JNT:
  • HWT-DTC collaboration: evaluation of simulated radar-based products
  • NASA ROSES project: developing methods for the evaluation of satellite-derived products

  4. DTC Objective Evaluation for 2010 HWT Spring Experiment
  • Severe evaluation: REFC or CREF at 20, 25, 30, 35, 40, 50, 60 dBZ
  • QPF evaluation: APCP and probability at 0.5, 1.0, 2.0 inches in 3 h and 6 h
  • Aviation evaluation: RETOP at 25, 30, 35, 40, 45 kft

  5. Quick Glance at HWT-DTC Evaluation Results
  • Frequency bias indicates that the CAPS ensemble mean field exhibits a large over-forecast of the areal coverage of cloud complexes
  • The ratio of MODE forecast objects to observed objects implies the large frequency bias may be due to a factor of 3-5 over-prediction of forecast convective cells
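
For context, frequency bias at a threshold is the ratio of forecast event frequency to observed event frequency; values above 1 indicate over-forecast areal coverage. A minimal sketch of the computation (the 30 dBZ threshold and the synthetic fields below are illustrative only, not data from the experiment):

```python
import numpy as np

def frequency_bias(forecast, observed, threshold):
    """Frequency bias = (hits + false alarms) / (hits + misses).
    Values > 1 mean the forecast covers more area than observed."""
    fcst_yes = forecast >= threshold
    obs_yes = observed >= threshold
    hits = np.sum(fcst_yes & obs_yes)
    false_alarms = np.sum(fcst_yes & ~obs_yes)
    misses = np.sum(~fcst_yes & obs_yes)
    return (hits + false_alarms) / (hits + misses)

# Illustrative only: two synthetic composite-reflectivity fields (dBZ)
rng = np.random.default_rng(0)
fcst = rng.uniform(0, 60, size=(100, 100))
obs = rng.uniform(0, 60, size=(100, 100))
print(frequency_bias(fcst, obs, threshold=30.0))
```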

  6. NASA ROSES Project: Developing Methods for the Evaluation of Satellite-Derived Products

  7. Satellite Cloud Evaluation Studies
  • NCAR/JNT has started to evaluate the use of A-Train observations for NWP evaluation and, eventually, aviation product evaluation
  • Currently focused on CloudSat-CALIPSO products
  • A goal of the project is to create a common toolkit in the Model Evaluation Tools (MET) for integrating satellite observations that will provide meaningful comparisons with NWP model output
  • Extend MET to include evaluations in the vertical plane

  8. MET Overview
  MET is designed to evaluate a variety of datasets (e.g., rain gauge, radar, satellite, NWP products). The following statistical tools are available:
  • Grid-to-point verification (e.g., compare NWP or satellite gridded products to rain gauge observations)
  • Grid-to-grid verification (e.g., compare NWP or satellite products to radar products)
  • Advanced spatial verification techniques, which compare precipitation "features" in gridded fields
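
As a rough illustration of the grid-to-point idea (this is not MET's implementation; MET is a compiled toolkit driven by configuration files, and every name below is hypothetical), a nearest-neighbor pairing of a gridded field against point observations might look like:

```python
import numpy as np

def grid_to_point_errors(grid, grid_lats, grid_lons, obs_lats, obs_lons, obs_vals):
    """Nearest-neighbor grid-to-point pairing: for each point observation,
    take the closest grid cell, then summarize the paired errors."""
    errors = []
    for lat, lon, val in zip(obs_lats, obs_lons, obs_vals):
        i = int(np.abs(grid_lats - lat).argmin())  # nearest grid row
        j = int(np.abs(grid_lons - lon).argmin())  # nearest grid column
        errors.append(grid[i, j] - val)
    errors = np.asarray(errors)
    bias = errors.mean()
    rmse = np.sqrt((errors ** 2).mean())
    return bias, rmse
```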

  9. Advanced Spatial Methods
  [Figure: 24-h precipitation forecast vs. precipitation analysis]
  • Traditional statistics often cannot account for spatial or temporal errors: displacement in time and/or space, location, intensity, and orientation
  • Spatial techniques such as the Method for Object-based Diagnostic Evaluation (MODE) add value to the product evaluation
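
MODE identifies "objects" by smoothing a field with a circular convolution filter, thresholding the smoothed field, and grouping connected points. A minimal sketch of that idea (the radius and threshold values are placeholders; MODE's actual parameters come from its configuration file):

```python
import numpy as np
from scipy import ndimage

def identify_objects(field, radius=5, threshold=5.0):
    """MODE-style object identification sketch: circular-filter smoothing,
    thresholding, then connected-component labeling."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    kernel = (x ** 2 + y ** 2 <= radius ** 2).astype(float)
    kernel /= kernel.sum()                        # normalize the disc filter
    smoothed = ndimage.convolve(field, kernel, mode="constant")
    labels, n_objects = ndimage.label(smoothed >= threshold)
    return labels, n_objects
```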

  10. Example A-Train and NWP Comparison
  • We performed our comparison using the RUC (http://ruc.noaa.gov/) cloud-top height and derived reflectivity products
  • Performed the comparison at two spatial resolutions (13 and 20 km) over the continental US
  • Compared observed cloud top and vertical structure (reflectivity) with model-derived fields

  11. Cloud Top Height Comparisons
  • Identified all CloudSat profiles and model grid boxes that contain cloud
  • Identified all model grid boxes containing at least 10 CloudSat points (roughly half the number of points that could fall within a grid box)
  • Performed traditional statistics using multiple matching methods (nearest neighbor, mean, distance-weighted mean)
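
Of the matching methods listed, the distance-weighted mean is the least self-explanatory. A sketch assuming simple inverse-distance weighting (the slide does not specify the weighting function, so the power-law form here is an assumption):

```python
import numpy as np

def distance_weighted_mean(values, distances_km, power=2.0):
    """Inverse-distance-weighted mean of the CloudSat values falling inside
    one model grid box; closer profiles get more weight."""
    w = 1.0 / np.maximum(distances_km, 1e-6) ** power
    return float(np.sum(w * values) / np.sum(w))

# Example: cloud-top heights (km) and distances (km) to the grid-box center
print(distance_weighted_mean(np.array([9.1, 9.4, 8.8]), np.array([2.0, 5.5, 1.2])))
```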

  12. Reflectivity Profile Comparisons
  • Used RUC native-grid mixing ratios (cloud water, rain, ice, snow, and graupel) and converted them to an estimated reflectivity using the CSU QuickBeam tool
  • Retrieved a vertical plane in the model fields along the CloudSat path
  • Performed a spatial comparison between observed and model fields
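
QuickBeam itself is a full radar simulator (multiple hydrometeor species, attenuation, non-Rayleigh scattering), so the following is only a crude stand-in for the mixing-ratio-to-reflectivity step: a single-species Marshall-Palmer conversion for rain, offered as an illustrative assumption rather than QuickBeam's algorithm:

```python
import numpy as np

def rain_reflectivity_dbz(q_rain, rho_air=1.0, n0=8.0e6, rho_w=1000.0):
    """Rough reflectivity (dBZ) from rain mixing ratio (kg/kg), assuming an
    exponential Marshall-Palmer drop-size distribution and Rayleigh scattering.
    Not QuickBeam: real simulators handle all species plus attenuation."""
    q = np.maximum(q_rain, 1.0e-12)                     # avoid log of zero
    lam = (np.pi * rho_w * n0 / (rho_air * q)) ** 0.25  # DSD slope [1/m]
    z_lin = 720.0 * n0 * lam ** -7.0 * 1.0e18           # Z in mm^6 m^-3
    return 10.0 * np.log10(z_lin)

print(rain_reflectivity_dbz(1.0e-3))  # ~43 dBZ for 1 g/kg of rain
```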

  13. Example: 06 September 2007

  14. Cloud Top Height Comparisons
  • Cloud-top height distributions along the track
  • The forecast distribution of cloud top lies within the observed distribution

  15. Cloud Top Height Comparisons
  • Forecast mean and standard deviation with 95% confidence intervals (0800 UTC case)
  • The evaluation is not sensitive to the weighting scheme
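
The slide does not say how the 95% confidence intervals were constructed; a minimal sketch assuming a normal approximation for the sample mean:

```python
import numpy as np

def mean_with_ci(x, z=1.96):
    """Sample mean with a normal-approximation 95% confidence interval."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    half = z * x.std(ddof=1) / np.sqrt(x.size)
    return m, (m - half, m + half)

# Example with made-up cloud-top heights (km)
print(mean_with_ci([8.9, 9.2, 9.0, 9.5, 8.7]))
```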

  16. Reflectivity Profile Comparisons
  • We are developing tools to spatially and temporally evaluate model fields in the vertical plane
  • Match cloud objects along track, off track, and in adjacent hours
  • The challenge is to create representative comparisons: resolution differences mean this is not a direct field-to-field comparison

  17. Spatial-based Comparison: Along Track (20 km Resolution)

  18. Spatial-based Comparison: Search "Off Track" for Best Match
  • The most intense features are identified
  • Searching "off track" found better matches, indicating spatial or temporal errors in the forecast
  • However, the coarse model resolution makes matching objects difficult
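
One simple way to realize an "off-track" search is greedy centroid matching between labeled observed and forecast objects with a displacement tolerance; a sketch (the max_dist tolerance and the greedy pairing rule are assumptions, not MODE's actual matching logic):

```python
import numpy as np
from scipy import ndimage

def match_objects(obs_labels, fcst_labels, max_dist=50.0):
    """Pair each observed object with the nearest forecast object whose
    centroid lies within max_dist grid points; unmatched objects hint at
    displacement (spatial or temporal) errors in the forecast."""
    obs_c = ndimage.center_of_mass(obs_labels > 0, obs_labels,
                                   range(1, int(obs_labels.max()) + 1))
    fc_c = ndimage.center_of_mass(fcst_labels > 0, fcst_labels,
                                  range(1, int(fcst_labels.max()) + 1))
    pairs = []
    for i, oc in enumerate(obs_c, start=1):
        dists = [np.hypot(oc[0] - fc[0], oc[1] - fc[1]) for fc in fc_c]
        if dists and min(dists) <= max_dist:
            pairs.append((i, int(np.argmin(dists)) + 1))
    return pairs
```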

  19. Future Work
  Future updates to MET:
  • Complete code to read A-Train products into MET
  • Apply and test object-based methods in the vertical plane in MET
  • Improve methods for verifying cloud and precipitation properties (e.g., how to compare different resolutions and model parameters)
  • Implement a display tool for visualization of satellite and model products within MET

  20. Future Work
  Future A-Train comparisons:
  • Cloud base, cloud type, cloud water content, ice water content, etc.
  • Evaluation of other weather features: tropical storms, multilayer clouds, clouds over complex terrain, etc.
  • We would like to extend the tool to other satellite datasets (e.g., TRMM) and to other model products
