
Object Based Cluster-Analysis and Verification of a Convection-Allowing Ensemble during the 2009 NOAA Hazardous Weather Testbed Spring Experiment

Aaron Johnson and Xuguang Wang
School of Meteorology and Center for Analysis and Prediction of Storms



Presentation Transcript


  1. Object Based Cluster-Analysis and Verification of a Convection-Allowing Ensemble during the 2009 NOAA Hazardous Weather Testbed Spring Experiment Aaron Johnson and Xuguang Wang School of Meteorology and Center for Analysis and Prediction of Storms Fanyou Kong (CAPS), Ming Xue (SOM&CAPS), Kevin Thomas (CAPS), Keith Brewster (CAPS), Yunheng Wang (CAPS), Jidong Gao (NSSL) Warn-on-Forecast and High Impact Weather Workshop 9 February 2012

  2. Motivation • Convection-allowing forecasts produce realistic-looking convective systems, enabling storm-mode forecasts (Coniglio et al. 2010). • Object-based verification is more consistent with subjective evaluations of high-resolution precipitation forecasts than traditional metrics (e.g., Davis et al. 2006a; Johnson et al. 2011a). • Physically descriptive diagnosis of errors. • Deterministic, not just probabilistic, verification is needed for model development and ensemble design. • A different perspective on deterministic verification than traditional metrics (e.g., Kong et al. 2009). • Object-based cluster analysis can show the impact of perturbation sources on forecast diversity (Yussouf et al. 2004). • Model dynamics have a dominant impact on the 2009 CAPS ensemble clustering (Johnson et al. 2011b). • Object-based verification can help us understand the differences between ARW and NMM. • The optimal grid spacing to balance computational cost and forecast quality is still an open question (e.g., Schwartz et al. 2009): is it worth going from 4 km to 1 km grid spacing?

  3. Outline • Overview of the ensemble and the object-based methodology • Object-based cluster analysis • Forecast object realism evaluated with the sample-climate average of object attributes • Forecasts vs. observations • ARW vs. NMM • 1 km vs. 4 km • Forecast accuracy evaluated with the object-based MMI (Median of Maximum Interest) and a newly proposed OTS (Object-based Threat Score) • Forecasts vs. observations • ARW vs. NMM • 1 km vs. 4 km • Summary and Conclusions

  4. Ensemble configuration • Radar reflectivity and radial velocity are assimilated using ARPS 3DVAR and cloud analysis for 17 members. • 10 members are from WRF-ARW, 8 from WRF-NMM, and 2 from ARPS. • All 20 members are initialized at 00 UTC and integrated for 30 hours over a near-CONUS domain on 26 days from 29 April through 5 June 2009, on a 4 km grid without cumulus parameterization (CP). • The initial background field is the 00 UTC NCEP NAM analysis. • Coarser-resolution (~35 km) IC/LBC perturbations are obtained from NCEP SREF forecasts. • Perturbations to the microphysics, planetary boundary layer, shortwave radiation, and land surface model physics schemes.

  5. Example of objects identified by MODE (the Method for Object-based Diagnostic Evaluation); a sketch of the identification pipeline follows.
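MODE identifies objects by convolving the raw precipitation field with a circular filter, thresholding the smoothed field, and treating each connected region that survives as an object. As a rough illustration only (not the MODE code, and not the exact settings of this study), a minimal Python sketch of that pipeline might look like the following; the radius and threshold values are placeholders.

```python
import numpy as np
from scipy import ndimage

def identify_objects(precip, radius=4, threshold=5.0):
    """Illustrative MODE-style object identification.

    precip    : 2-D array of accumulated precipitation (mm)
    radius    : smoothing radius in grid points (placeholder value)
    threshold : precipitation threshold in mm (placeholder value)
    """
    # Build a normalized circular (disk) convolution kernel.
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    disk = (x**2 + y**2 <= radius**2).astype(float)
    disk /= disk.sum()

    # Smooth the raw field, then threshold it to a binary mask.
    smoothed = ndimage.convolve(precip, disk, mode="constant")
    mask = smoothed >= threshold

    # Each connected region of the mask is one object; attributes
    # (area, centroid, aspect ratio, orientation) are then computed
    # per labeled region.
    labels, n_objects = ndimage.label(mask)
    return labels, n_objects
```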

  6. Object-based scores • The similarity of two objects is quantified by the Total Interest, I (0 ≤ I ≤ 1), a function of the area ratio, aspect ratio difference, orientation angle difference, and centroid distance. • The mean value of object attributes is also used to evaluate the overall realism of the objects. • Median of Maximum Interest (MMI; Davis et al. 2009): compute the maximum Total Interest for each object when it is compared with all objects in the opposing field at that time, then take the median of these maximum interests over all forecast and observed objects. • Object-based Threat Score (OTS; Johnson et al. 2011a): weight the area of each object by its Total Interest when compared to the corresponding object in the opposing field, sum over all pairs of corresponding objects, and divide by the total area of all objects:
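The OTS formula itself did not survive transcription. Reconstructed from the description above (consistent with Johnson et al. 2011a), it is

\[
\mathrm{OTS} \;=\; \frac{1}{A_f + A_o} \sum_{p=1}^{P} I_p \left( a_p^f + a_p^o \right),
\]

where \(A_f\) and \(A_o\) are the total areas of all forecast and observed objects, and pair \(p\) matches a forecast object of area \(a_p^f\) with an observed object of area \(a_p^o\) at Total Interest \(I_p\).

A minimal Python sketch of both scores is below, assuming a precomputed matrix `interest[i, j]` of Total Interest between forecast object `i` and observed object `j`; the greedy matching used for OTS is an assumed reading of "corresponding object in the opposing field", not necessarily the paper's exact matching rule.

```python
import numpy as np

def mmi(interest):
    """Median of Maximum Interest (after Davis et al. 2009).

    interest : (n_fcst, n_obs) array of Total Interest values.
    """
    max_f = interest.max(axis=1)  # best match for each forecast object
    max_o = interest.max(axis=0)  # best match for each observed object
    # Median over all forecast and observed objects combined.
    return np.median(np.concatenate([max_f, max_o]))

def ots(interest, area_f, area_o):
    """Object-based Threat Score (after Johnson et al. 2011a).

    area_f, area_o : 1-D arrays of forecast / observed object areas.
    Objects are paired greedily by descending Total Interest (an
    assumed matching rule); each pair contributes its interest-
    weighted combined area, normalized by the total object area.
    """
    interest = interest.copy()
    total_area = area_f.sum() + area_o.sum()
    if total_area == 0:
        return 0.0
    score = 0.0
    while interest.size and interest.max() > 0:
        # Highest-interest unmatched forecast/observed pair.
        i, j = np.unravel_index(np.argmax(interest), interest.shape)
        score += interest[i, j] * (area_f[i] + area_o[j])
        interest[i, :] = 0.0  # remove both objects from further matching
        interest[:, j] = 0.0
    return score / total_area
```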

  7. Need for an object-based approach • [Figure not reproduced: two example forecast–observation comparisons.] Case 1: ED = 1305 mm, 1 − OTS = 0.486; Case 2: ED = 1595 mm, 1 − OTS = 0.381. • The Euclidean distance (ED) and 1 − OTS rank the two cases in opposite order; OTS is subjectively more reasonable as a distance measure.

  8. Advantage of object-based clustering • Hierarchical cluster analysis (HCA) based on NED is strongly sensitive to storm locations and amplitude. • HCA based on OTS can form clusters according to storm mode, similar to how we interpret the forecasts subjectively and consistent with severe storm forecasting applications (Johnson et al. 2011a,b, MWR). A sketch of the OTS-based HCA follows below.
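To make the OTS-based HCA concrete: given a symmetric matrix of pairwise OTS values between ensemble members, 1 − OTS serves as the dissimilarity fed to a standard agglomerative clustering routine. The sketch below uses SciPy; the average-linkage method and the cutoff value are illustrative assumptions, not necessarily the choices used in the study.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import squareform

def cluster_members(pairwise_ots, max_dissim=0.5):
    """Hierarchical cluster analysis of ensemble members with
    1 - OTS as the dissimilarity (linkage and cutoff are illustrative).

    pairwise_ots : (n_members, n_members) symmetric matrix of OTS
                   between each pair of members' forecasts.
    max_dissim   : dissimilarity at which the tree is cut (placeholder).
    """
    dissim = 1.0 - pairwise_ots
    np.fill_diagonal(dissim, 0.0)       # a member matches itself exactly
    condensed = squareform(dissim)      # condensed form required by linkage
    tree = linkage(condensed, method="average")
    # Integer cluster label for each member.
    return fcluster(tree, t=max_dissim, criterion="distance")
```

Cutting the tree at different dissimilarities then reveals the primary clustering (e.g., by model dynamics) and any secondary clustering (e.g., by physics scheme) discussed on the next slides.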

  9. Object-based clustering of 3-h forecasts • How sensitive are forecasts to different sources of perturbations? • Similarity is quantified with OTS. • Members in the same cluster are systematically more similar than members in different clusters. • 3-hour forecasts are most sensitive to radar data assimilation (DA), model dynamics, and the microphysics scheme (Johnson et al. 2011a,b, MWR).

  10. Object-based clustering of 24-h forecasts • Little impact of radar DA. • Model dynamics still dominant. • Secondary clustering by PBL scheme rather than by microphysics.

  11. Average attributes: forecast vs. observed • Too many objects are forecast after the 1-h lead time. • The average forecast object is smaller, more circular, and farther east than the average observed object.

  12. Average attributes: ARW vs. NMM • Objects from the NMM model are on average more numerous, larger, more circular, more zonally oriented, and, beginning at 18 UTC, farther south than those from ARW. • The number of objects, mean aspect ratio, and mean angle are most similar to observations for ARW. • The mean area is most similar to observations for NMM.

  13. Average attributes: 1 km vs. 4 km • The 1 km forecasts produce objects that are fewer, larger, less circular, and farther west on average than the 4 km forecasts. • The 1 km forecasts are generally closer to observations than 4 km for the number of objects, area, and E-W location.

  14. Accuracy: All members • OTS maximum at the 12-h lead time, caused by better forecasts of the large precipitation systems at 12 UTC; the diurnal cycle is similar to that of the traditional ETS. • MMI maximum at the 24-h lead time, caused by realistic mesoscale placement of the small precipitation systems at 00 UTC. • The control member is generally more accurate than the perturbed members. • The members without radar DA were worst, especially at early lead times. • NMM worse than ARW and ARPS.

  15. Accuracy: ARW vs. NMM • The ARW group has significantly higher OTS, and more frequently contains the member with the best OTS, than the NMM group, except at short lead times. • A similar but less pronounced result holds for MMI. • Diagnostics found the NMM configurations best at maintaining the initial storms and the ARW configurations best at forecasting future storms.

  16. Accuracy: 1 km vs. 4 km • No significant difference in OTS between the 1 km and 4 km members. • The 1 km member has significantly lower MMI at the 12- and 24-h lead times. • The lower MMI at the 12-h lead time is caused by missed observed objects, small objects in particular. • Consistent with the worse under-forecasting at 12 UTC seen earlier.

  17. Summary and Conclusions Clustering analysis: • Cluster analysis shows a large impact of model dynamics on forecast clustering, even after bias adjustment, with an additional impact of the microphysics scheme at the 3-h lead time and of the PBL scheme at the 24-h lead time. Verification: mean attributes • On average, forecast objects are too numerous, too small, too circular, and too far east compared to observations. • ARW vs. NMM: ARW is better for the number of objects, mean aspect ratio, and mean angle; NMM is better for mean area. • 1 km vs. 4 km: After the 1-h lead time, 1 km is better than 4 km for the number of objects, area, and E-W location. Accuracy • After the 1-h lead time, ARW members are more accurate than NMM members; at short lead times, the NMM configurations seem to evolve the assimilated storms better. • Generally similar accuracy at 1 and 4 km grid spacing.

  18. 4 km instead of 16 km smoothing radius • Still fewer objects, but not as few as the observations. • Still a larger area (better). • Now a similar aspect ratio (less rounded than the observed QPE). • More similar location.
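The kind of smoothing-radius sensitivity shown on this slide can be probed with the illustrative `identify_objects()` sketch from earlier; the synthetic field, radii, and threshold below are placeholders (radii given in grid points, e.g., 4 km vs. 16 km corresponds to 1 vs. 4 grid points on a 4 km grid).

```python
import numpy as np

# Hypothetical comparison of object counts at two smoothing radii,
# reusing the illustrative identify_objects() sketch defined above.
rng = np.random.default_rng(0)
precip = rng.gamma(shape=0.3, scale=8.0, size=(400, 400))  # synthetic field (mm)

for radius in (1, 4):  # e.g., 4 km vs. 16 km radius on a 4 km grid
    _, n = identify_objects(precip, radius=radius, threshold=5.0)
    print(f"radius = {radius} grid points -> {n} objects")
```

A larger radius merges nearby cells into fewer, larger objects, which is why the choice of radius changes the attribute comparisons summarized above.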
