
2006 NHC Verification Report Interdepartmental Hurricane Conference 5 March 2007




Presentation Transcript


  1. 2006 NHC Verification Report, Interdepartmental Hurricane Conference, 5 March 2007. James L. Franklin, NHC/TPC

  2. Verification Rules
  • System must be a tropical (or subtropical) cyclone at both the forecast time and the verification time; includes the depression stage (except as noted).
  • Verification results are final (until we change something).
  • Special advisories are ignored; regular advisories are verified.
  • Skill baseline for track is the revised CLIPER5 (developmental data updated to 1931-2004 [ATL] and 1949-2004 [EPAC]), run post-storm on operational compute data.
  • Skill baseline for intensity is the new decay-SHIFOR5 model, run post-storm on operational compute data (OCS5). Minimum D-SHIFOR5 forecast is 15 kt.
  • New interpolated version of the GFDL: GHMI. The previous GFDL intensity forecast is lagged 6 h as always, but the offset is not applied at or beyond 30 h; half the offset is applied at 24 h, and the full offset at 6-18 h. ICON now uses GHMI.
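The GHMI offset weighting described above (full offset at 6-18 h, half at 24 h, none at or beyond 30 h) is a simple step function of forecast hour. A minimal sketch, assuming the slide's stated breakpoints; the function name and the treatment of hours between the listed verification times are illustrative assumptions:

```python
def ghmi_offset_weight(tau_h):
    """Fraction of the interpolation offset applied to the lagged GFDL
    intensity forecast at forecast hour tau_h, per the scheme on the slide:
    full offset through 18 h, half at 24 h, none at or beyond 30 h.
    (Illustrative sketch; not NHC's operational code.)"""
    if tau_h <= 18:
        return 1.0   # full offset applied at 6-18 h
    elif tau_h < 30:
        return 0.5   # half the offset applied at 24 h
    else:
        return 0.0   # offset not applied at or beyond 30 h
```

This tapering lets the offset correct short-lead intensity while letting the model's own forecast dominate at longer leads.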

  3. Decay-SHIFOR5 Model
  • Begin by running the regular SHIFOR5 model.
  • Apply the DeMaria module to adjust tropical cyclone intensity for decay over land. This includes recent adjustments for reduced decay over narrow ("skinny") landmasses, which estimate the fraction of the circulation over land.
  • The algorithm requires a forecast track; for a skill baseline, the CLIPER5 track is used. (OFCI could be used if the intent were to provide guidance.)
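The land-decay step above follows the widely cited DeMaria-Kaplan exponential decay toward a background intensity. A minimal sketch, assuming the commonly published constants (background intensity near 26.7 kt, decay rate near 0.095 per hour); the linear scaling of the decay rate by the fraction of the circulation over land, and the function name, are illustrative assumptions rather than the operational formulation:

```python
import math

def decay_over_land(v0_kt, hours, land_frac, vb_kt=26.7, alpha=0.095):
    """Simplified inland decay of tropical-cyclone intensity:
        V(t) = Vb + (V0 - Vb) * exp(-alpha * f_land * t)
    where f_land is the estimated fraction of the circulation over land
    (the 'skinny landmass' adjustment noted on the slide).
    Constants and scaling are illustrative, not NHC's operational values."""
    v = vb_kt + (v0_kt - vb_kt) * math.exp(-alpha * land_frac * hours)
    return max(v, 15.0)  # slide 2: minimum D-SHIFOR5 forecast is 15 kt
```

With land_frac = 0 (fully over water) the intensity is unchanged, and a narrow landmass (small land_frac) decays the storm more slowly than full landfall, which is the point of the adjustment.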

  4. 2006 Atlantic Verification

  VT (h)   NT   TRACK (n mi)   INT (kt)
  ======================================
   000    241        9.5          2.1
   012    223       29.7          6.5
   024    205       50.8         10.0
   036    187       71.9         12.4
   048    169       97.0         14.3
   072    132      148.7         18.1
   096    100      205.5         19.6
   120     78      265.3         19.0

  Values in green meet or exceed all-time records.
  * 48 h track error for TS and H only was 96.6 n mi.

  5. Track Errors by Storm

  6. Track Errors by Storm

  7. 2006 vs. 5-Year Mean

  8. New 5-Year Mean 55 n mi/day

  9. OFCL Error Distributions

  10. Errors cut in half since 1990

  11. Mixed Bag of Skill

  12. 2006 Track Guidance (Top Tier)

  13. 2nd Tier Early Models

  14. 2006 Late Models

  15. Guidance Trends

  16. Goerss Corrected Consensus
  • CGUN 120 h FSP: 33%
  • CCON 120 h FSP: 36%
  • Small improvements of 1-3%, but the benefit is lost by 5 days.

  17. FSU Superensemble vs Goerss Corrected Consensus

  18. FSU Superensemble vs Other Consensus Models

  19. 2006 vs 5-Year Mean

  20. No progress with intensity

  21. Skill sinking faster than dry air over the Atlantic

  22. Intensity Guidance

  23. Dynamical Intensity Guidance Finally Surpasses Statistical Guidance

  24. Intensity Error Distribution
  When there are few rapid intensifiers, OFCL forecasts have a substantial high bias. GHMI had larger positive biases but higher skill (i.e., smaller but one-sided errors).

  25. FSU Superensemble vs Other Consensus Models

  26. 2006 East Pacific Verification

  VT (h)    N   TRK (n mi)   INT (kt)
  ====================================
   000    379       8.8         1.7
   012    341      30.2         6.8
   024    302      54.5        11.2
   036    264      77.4        14.6
   048    228      99.7        16.1
   072    159     142.3        17.8
   096    107     186.1        19.3
   120     71     227.5        18.3

  Values in green represent all-time lows.

  27. 2006 vs 5-Year Mean

  28. Errors cut by 1/3 since 1990

  29. OFCL Error Distributions

  30. Skill trend noisy but generally upward

  31. 2006 Track Guidance (1st Tier)
  Larger separation between dynamical and consensus models (model errors more random, less systematic).

  32. FSU Superensemble vs Other Consensus Models

  33. Relative Power of Multi-model Consensus
  ne = 1.65; ne = 2.4
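The slide's ne values can be read as the effective number of independent models in the consensus: averaging n_e effectively independent forecasts shrinks random error by roughly 1/sqrt(n_e). A minimal sketch under that interpretation (which is an assumption about the slide, not stated on it):

```python
import math

def consensus_error_reduction(ne):
    """Expected fractional size of the random error of a consensus mean,
    relative to a single model, when the consensus contains n_e
    effectively independent members: errors scale as 1/sqrt(n_e).
    (Interpretation of the slide's n_e values is an assumption.)"""
    return 1.0 / math.sqrt(ne)

# Slide's values: n_e = 1.65 vs n_e = 2.4
for ne in (1.65, 2.4):
    print(f"n_e = {ne}: random error about "
          f"{consensus_error_reduction(ne):.0%} of a single model's")
```

Because member models share physics and initial conditions, n_e is far below the actual member count, which is why adding correlated models yields diminishing returns.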

  34. Same as it ever was…

  35. …same as it ever was.

  36. 2006 Intensity Guidance

  37. FSU Superensemble vs Other Consensus Models

  38. Summary: Atlantic Basin - Track
  • OFCL track errors set records for accuracy from 12 to 72 h. Mid-range skill appears to be trending upward.
  • OFCL track forecasts were better than all the dynamical guidance models but trailed the consensus models slightly.
  • GFDL, GFS, and NOGAPS provided the best dynamical track guidance at various times; UKMET trailed badly. No (early) dynamical model had skill at 5 days!
  • ECMWF performed extremely well when it was available, especially at longer lead times. A small improvement in arrival time would yield many more EMXI forecasts.
  • The FSU superensemble was not as good as the Goerss corrected consensus, and no better than GUNA in a three-year sample.

  39. Summary (2): Atlantic Basin - Intensity
  • OFCL intensity errors were very close to the long-term mean, but skill levels dropped very sharply (i.e., even though decay-SHIFOR errors were very low, OFCL errors did not decrease). OFCL errors also trailed the GFDL and ICON guidance.
  • For the first time, dynamical intensity guidance beat statistical guidance.
  • OFCL forecasts had a substantial high bias. Even though the GFDL had smaller errors than OFCL, its bias was larger.
  • The FSU superensemble was no better than a simple average of GFDL and DSHP (three-year sample).

  40. Summary (3): East Pacific Basin - Track
  • OFCL track errors were up and skill was down in 2006, although errors were slightly better than the long-term mean.
  • OFCL beat the dynamical models but not the consensus models. The gap between the dynamical models and the consensus is much larger in the EPAC (same as 2005).
  • The FSU superensemble was no better than GUNA (two-year sample).

  41. Summary (4): East Pacific Basin - Intensity
  • OFCL intensity errors and skill show little improvement.
  • GFDL beat DSHP after 36 h, but ICON generally beat both.
  • The FSU superensemble was slightly better than ICON at 24-48 h, but worse than ICON thereafter.
