
GOES-R AWG Product Validation Tool Development


Presentation Transcript


  1. GOES-R AWG Product Validation Tool Development: Winds Team Products. Derived Motion Winds: Jaime Daniels (STAR), Wayne Bresky (IMSG, Inc.), Steve Wanzong (CIMSS), Chris Velden (CIMSS), Andy Bailey (IMSG). Hurricane Intensity: Tim Olander (CIMSS), Chris Velden (CIMSS)

  2. OUTLINE • Example Product Output • Validation Strategies • Routine Validation Tools • “Deep-Dive” Validation Tools

  3. Derived Motion Winds

  4. Example Output: Long-wave IR Cloud-drift Winds. Cloud-drift winds derived from a Full Disk Meteosat-8 SEVIRI 10.8 µm image triplet centered at 1200 UTC 01 February 2007. Legend: High-Level 100-400 mb; Mid-Level 400-700 mb; Low-Level >700 mb

  5. Example Output: Visible Cloud-drift Winds. Cloud-drift winds derived from a Full Disk Meteosat-8 SEVIRI 0.60 µm image triplet centered at 1200 UTC 01 February 2007. Legend: Low-Level >700 mb

  6. Validation Strategies • Routinely generate the Derived Motion Wind (DMW) product in real-time using available ABI proxy data • Acquire reference/"ground truth" data and collocate the DMW product: radiosondes, GFS analysis, wind profilers, CALIPSO • Analyze and visualize data (imagery, GFS model, L2 products, intermediate outputs, reference/ground truth) using available and developed (customized) tools • Measure performance • Modify L2 product algorithm(s), as necessary

  7. Validation Strategies (flow diagram): MET-9 SEVIRI Full Disk imagery plus Clear-Sky Mask & Cloud Products feed routine generation of the winds product; the DMW product is collocated with reference/ground-truth data (radiosondes, GFS analyses from GRIB2 forecast files, CALIPSO); the DMW/radiosonde, DMW/GFS analysis, and DMW/CALIPSO matches are analyzed and visualized, comparison statistics are computed, product and ground-truth data are displayed, outliers are searched for, single DMWs are re-retrieved, and case study analyses are performed; L2 product algorithm(s) are updated as necessary

  8. Routine Validation Tools: Product Visualization. Heavy reliance on McIDAS (McIDAS-X and McIDAS-V) to visualize DMW products, intermediate outputs, diagnostic data, ancillary datasets, and reference/"ground truth" data

  9. Routine Validation Tools: Product Visualization. A Java-based program was written to display satellite wind vectors over a false-color image

  10. Routine Validation Tools: Collocation Tools • Collocation software (DMW and reference/"ground truth" winds): radiosondes, GFS analysis • Customized code (built on top of McIDAS) performs the routine daily collocation of Level-2 products with their associated reference ("truth") observations • Creation of comprehensive collocation databases that contain the information needed for comparisons and "error" analyses (see the sketch below). [Figures: satellite/raob winds; satellite/GFS winds]
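A minimal sketch of this matching step in Python (not the operational McIDAS-based code; the distance/pressure thresholds, field names, and nearest-neighbor strategy are illustrative assumptions):

```python
import numpy as np

# Illustrative collocation windows (assumptions, not the operational values)
MAX_DIST_KM = 150.0   # maximum horizontal separation
MAX_DP_HPA = 25.0     # maximum pressure-level separation

def great_circle_km(lat1, lon1, lat2, lon2):
    """Haversine distance in kilometres."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * np.arcsin(np.sqrt(a))

def collocate(dmws, raobs):
    """Pair each DMW with the closest raob level inside the windows.

    dmws, raobs: lists of dicts with 'lat', 'lon', 'p' (hPa), 'u', 'v' (m/s).
    Returns (dmw, raob) pairs to feed the comparison-statistics step.
    """
    pairs = []
    for w in dmws:
        best, best_d = None, MAX_DIST_KM
        for r in raobs:
            if abs(w['p'] - r['p']) > MAX_DP_HPA:
                continue                      # wrong level, skip
            d = great_circle_km(w['lat'], w['lon'], r['lat'], r['lon'])
            if d < best_d:                    # keep nearest match so far
                best, best_d = r, d
        if best is not None:
            pairs.append((w, best))
    return pairs
```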

  11. Routine Validation Tools: Comparison Statistics (satellite DMW vs. raob wind, or satellite DMW vs. GFS analysis wind)

GOES-13 CD WIND RAOB MATCH ERROR STATISTICS
PRESSURE RANGE: 100 - 1000   LATITUDE RANGE: -90 - 90

                          SAT    GUESS   RAOB
 RMS DIFFERENCE (m/s)     6.68    6.11
 NORMALIZED RMS           0.34    0.31
 AVG DIFFERENCE (m/s)     5.51    5.02
 STD DEVIATION (m/s)      3.78    3.48
 SPEED BIAS (m/s)        -0.97   -1.32
 |DIRECTION DIF| (deg)   14.85   15.06
 SPEED (m/s)             18.55   18.20   19.52
 SAMPLE SIZE             87100

• Customized codes enable the generation and visualization of comparison statistics: text reports, plus a database of statistics enabling time series of comparison statistics to be generated • Use of the PGPLOT graphics subroutine library: a Fortran- or C-callable, device-independent graphics package for making various scientific graphs • McIDAS is used to visualize the contents of the collocation databases. A sketch of these statistics in code follows below.
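The quantities in the report above can be written compactly in Python, assuming u/v wind components in m/s; the definition of NORMALIZED RMS as the RMS vector difference divided by the mean reference speed is an assumption inferred from the column names:

```python
import numpy as np

def wind_stats(su, sv, ru, rv):
    """Comparison statistics for satellite (su, sv) vs. reference (ru, rv) winds."""
    su, sv, ru, rv = map(np.asarray, (su, sv, ru, rv))
    vd = np.hypot(su - ru, sv - rv)            # vector difference magnitudes
    sat_spd, ref_spd = np.hypot(su, sv), np.hypot(ru, rv)
    # Meteorological "from" direction, degrees clockwise from north
    sat_dir = np.degrees(np.arctan2(-su, -sv)) % 360.0
    ref_dir = np.degrees(np.arctan2(-ru, -rv)) % 360.0
    ddir = np.abs((sat_dir - ref_dir + 180.0) % 360.0 - 180.0)  # wrap to [0, 180]
    rms = np.sqrt(np.mean(vd ** 2))
    return {
        'RMS DIFFERENCE (m/s)':  rms,
        'NORMALIZED RMS':        rms / ref_spd.mean(),   # assumed normalization
        'AVG DIFFERENCE (m/s)':  vd.mean(),              # mean vector difference (MVD)
        'STD DEVIATION (m/s)':   vd.std(),
        'SPEED BIAS (m/s)':      (sat_spd - ref_spd).mean(),
        '|DIRECTION DIF| (deg)': ddir.mean(),
        'SPEED (m/s)':           sat_spd.mean(),
        'SAMPLE SIZE':           vd.size,
    }
```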

  12. Routine Validation Tools: Comparison Statistics. [Figures: retrieved winds (100-400 mb) vs. radiosonde winds; retrieved winds (400-700 mb) vs. radiosonde winds]

  13. Example scatter plot generated with PGPLOT: Version 3 vs. Version 4 performance. LWIR cloud-drift winds, August 2006, Meteosat-8 Band 9; axes: Sat Wind Speed (m/s) vs. Radiosonde Wind Speed (m/s).
Black = Version 3 algorithm: RMS 7.78 m/s, MVD 6.14 m/s, Spd Bias -2.00 m/s, Speed 17.68 m/s, Sample 17,362.
Light blue = Version 4 algorithm (Nested Tracking): RMS 6.89 m/s, MVD 5.46 m/s, Spd Bias -0.18 m/s, Speed 17.91 m/s, Sample 17,428.
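For readers without PGPLOT, an equivalent two-population speed scatter is easy to sketch with matplotlib (a swapped-in plotting library, not the tool used here; the array names are hypothetical):

```python
import numpy as np
import matplotlib.pyplot as plt

def speed_scatter(ref_v3, sat_v3, ref_v4, sat_v4, outfile='speed_scatter.png'):
    """Radiosonde vs. satellite wind speed, one colour per algorithm version."""
    fig, ax = plt.subplots(figsize=(6, 6))
    ax.scatter(ref_v3, sat_v3, s=4, c='black', label='Version 3')
    ax.scatter(ref_v4, sat_v4, s=4, c='deepskyblue',
               label='Version 4 (Nested Tracking)')
    lim = max(np.max(ref_v3), np.max(sat_v3), np.max(ref_v4), np.max(sat_v4))
    ax.plot([0, lim], [0, lim], 'k--', lw=0.8)       # 1:1 reference line
    ax.set_xlabel('Radiosonde Wind Speed (m/s)')
    ax.set_ylabel('Sat Wind Speed (m/s)')
    ax.legend()
    fig.savefig(outfile, dpi=150)
```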

  14. Validation Strategies (flow diagram repeated from slide 7)

  15. "Deep-Dive" Validation Tools: a stand-alone re-retrieval and visualization tool that generates a single derived motion wind vector for a single target scene and visualizes the wind solution, tracking diagnostics, and target scene characteristics (PGPLOT library used). [Figure: displacement cluster plot, line displacement vs. element displacement; control 15x15 target, speed 12 m/s; Cluster 1, speed 15 m/s: largest cluster, measuring motion of the front; Cluster 2, speed 30 m/s: second cluster, measuring motion along the front, which matches the raob.] A sketch of this kind of displacement clustering follows below.
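The cluster plot above reflects the nested-tracking idea of grouping many local displacement estimates and reporting each cluster's mean motion. A hedged sketch of that grouping using DBSCAN (one plausible clustering choice for illustration, not necessarily the AWG algorithm's; the eps/min_samples settings are illustrative):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def motion_clusters(disp, eps_px=1.5, min_samples=5):
    """Cluster (line, element) displacement estimates for one target scene.

    disp: (N, 2) array of local displacement estimates in pixels.
    Returns (size, mean displacement) per cluster, largest cluster first.
    """
    labels = DBSCAN(eps=eps_px, min_samples=min_samples).fit_predict(disp)
    clusters = []
    for k in set(labels) - {-1}:              # label -1 marks noise points
        members = disp[labels == k]
        clusters.append((len(members), members.mean(axis=0)))
    clusters.sort(key=lambda c: c[0], reverse=True)
    return clusters   # clusters[0] is the dominant motion, clusters[1] the next, ...
```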

  16. "Deep-Dive" Validation Tools: the same stand-alone re-retrieval and visualization tool also displays target scene characteristics and feature tracking diagnostics (PGPLOT library used). [Figures: correlation surface plots; spatial coherence plots with the spatial coherence threshold marked.]

  17. "Deep-Dive" Validation Tools: Using CALIPSO/CloudSat data to validate satellite wind height assignments • The winds team continues to work closely with the cloud team on the cloud height problem (case studies, most recently) • Leverages the unprecedented cloud information offered by CALIPSO and CloudSat measurements • Enables improved error characterization of satellite wind height assignments • Enables feedback for potential improvements to satellite wind height assignments • Improves the overall accuracy of satellite-derived winds. [Figure: GOES-12 cloud-drift wind heights overlaid on a CALIPSO total attenuated backscatter image at 532 nm; CALIPSO cloud height vs. satellite wind height.] Work in progress…

  18. "Deep-Dive" Validation Tools: visualization of reference/"ground truth" (radiosonde) data, done using McIDAS-V

  19. "Deep-Dive" Validation Tools: At what height does the satellite wind "best fit" the radiosonde profile? (A minimal sketch of the search follows below.)
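A minimal sketch of the best-fit search, assuming the sounding is given as parallel pressure/u/v arrays (the operational best-fit method adds search-layer limits and difference-threshold checks omitted here):

```python
import numpy as np

def best_fit_pressure(u_sat, v_sat, raob_p, raob_u, raob_v):
    """Pressure level at which the satellite wind best matches the raob profile.

    Scans the sounding and returns (pressure, vector difference) at the level
    with the smallest vector difference from the satellite wind.
    """
    vd = np.hypot(np.asarray(raob_u) - u_sat, np.asarray(raob_v) - v_sat)
    i = int(np.argmin(vd))                    # level of minimum vector difference
    return raob_p[i], vd[i]
```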

  20. "Deep-Dive" Validation Tools: The search for outliers (vector difference > 20 m/s). Panels: 100-250 hPa, 251-350 hPa, 351-500 hPa. Large wind barbs are GFS analysis winds at 200 hPa. (A sketch of the screening test follows below.)
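The screening test itself is short; a sketch, assuming collocated satellite and GFS analysis u/v components in m/s:

```python
import numpy as np

def flag_outliers(sat_u, sat_v, gfs_u, gfs_v, threshold=20.0):
    """Boolean mask of winds whose vector difference from the GFS analysis
    exceeds the slide's 20 m/s screening threshold."""
    vd = np.hypot(np.asarray(sat_u) - gfs_u, np.asarray(sat_v) - gfs_v)
    return vd > threshold

# Usage: suspects = winds[flag_outliers(u, v, gu, gv)] for case-study follow-up
```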

  21. Come see our Derived Motion Winds posters: "GOES-R AWG Winds Team: Current Validation Activities" (Steve Wanzong is manning this poster) and "New Methods for Minimizing the Slow Speed Bias Associated with Atmospheric Motion Vectors (AMVs)" (Wayne Bresky is manning this poster)

  22. Hurricane Intensity

  23. Hurricane Intensity Product • HIE algorithm output is purely textual (specifically, it consists of the current TC intensity in terms of wind speed in m/s), so no product displays are required. Examples of output "tailored products" are provided. [Figures: current intensity "bulletin"; history file listing.]

  24. Validation Strategies • HIE intensity estimates (stored in HIE history files) can be validated against two different "ground truth" data sets, either in real-time or post-storm, depending on the data set used. • In situ aircraft reconnaissance measurements of maximum wind speed: may not be available for part or all of the storm lifetime, depending on where the storm track is located. • "Working" and "Final" Best Track storm intensity history: available for the entire storm lifetime, but may not be based entirely on in situ data. The Working Best Track is available in real-time during the storm lifetime; it may not be accurate due to bad observational data, inaccurate Dvorak estimates, or TC forecaster error. The Final Best Track is made available after extensive analysis of all in situ observations, estimates from remote sensing methods/applications, and TC forecast methodology. • "Ground truth" data can be easily obtained via the NOAA "Family of Services" or FTP sites (such as NOAA/NHC); a sketch of reading a Best Track file follows below.
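Best Track files from NOAA/NHC are distributed in the ATCF "b-deck" format; a hedged sketch of pulling the date and maximum wind from such a file (the column positions follow the standard comma-delimited ATCF layout, which should be verified against the files actually downloaded):

```python
KT_TO_MS = 0.514444  # knots to m/s

def read_best_track(path):
    """Parse an ATCF b-deck file into (YYYYMMDDHH string, vmax in m/s) records.

    Assumes the standard ATCF layout: date in column 3, maximum sustained
    wind in knots in column 9 of each comma-delimited line.
    """
    records = []
    with open(path) as f:
        for line in f:
            cols = [c.strip() for c in line.split(',')]
            if len(cols) < 9 or not cols[8]:
                continue                      # skip short or malformed lines
            records.append((cols[2], float(cols[8]) * KT_TO_MS))
    return records
```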

  25. Routine Validation Tools • Datasets will include the HIE history file output for each storm being analyzed. The history files will be compared directly to the in situ aircraft reconnaissance measurements of TC intensity or the Best Track intensity for the storm in question. • The HIE validation suite will produce statistical comparisons of the HIE intensity estimates and the validation data. The statistical analysis will be provided in terms of wind speed (in m/s) precision and accuracy metrics, as well as additional error metrics utilized at operational NOAA TC forecasting and analysis centers (a sketch of these metrics follows below). • The HIE validation analysis suite has already been used by an operational TC forecast center (NOAA/SAB) to verify the ADT/HIE, so it is already familiar to organizations that wish to validate the HIE. • Output products are ASCII text files derived using a series of C programs and shell scripts. No proprietary software is currently used.
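A sketch of those precision/accuracy metrics in Python, matching the bias/rmse/aae/stdv/cnt columns of the report on the next slide (the definitions are assumed from the column names):

```python
import numpy as np

def intensity_errors(hie, truth):
    """Error statistics of HIE minus 'ground truth' intensity, in m/s."""
    d = np.asarray(hie) - np.asarray(truth)   # signed intensity differences
    return {'bias': d.mean(),
            'rmse': np.sqrt(np.mean(d ** 2)),
            'aae':  np.abs(d).mean(),         # average absolute error
            'stdv': d.std(),
            'cnt':  d.size}
```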

  26. Routine Validation Tools • Current intensity validation statistical output example • Intensity statistical error analysis versus "ground truth" (either reconnaissance and/or NHC Best Track information) • Accuracy and precision measurements are displayed for the storm in question • Categorical differences of the ADT from "ground truth" provide a quick overview of any intensity estimate biases (a sketch of the binning follows below) • Output layout mirrors the output parameters utilized in operations by NOAA/SAB

INTENSITY ERRORS (wind speed : m/s)
          bias   rmse    aae   stdv   cnt
ADT:07L   2.04   7.33   5.61   7.04    23

ADT-Best Track Intensity Differences
dCAT    ALL   TD   TS  H12  H35
<-20      0    0    0    0    0
-20       0    0    0    0    0
-15       1    0    0    0    1
-10       1    0    0    0    1
-5        4    0    1    1    2
0         9    0    1    6    2
+5        4    0    0    1    3
+10       2    0    0    0    2
+15       2    0    0    0    2
+20       0    0    0    0    0
>+20      0    0    0    0    0
Total    23    0    2    8   13

dCAT      ALL     TD      TS     H12     H35
<= 2.5:  39.1%   0.0%   50.0%   75.0%   15.4%
<= 7.5:  73.9%   0.0%  100.0%  100.0%   53.8%
>10.0:   17.4%   0.0%    0.0%    0.0%   30.8%
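The dCAT rows above bin the ADT-minus-Best-Track differences into 5 m/s-wide categories. A sketch of that binning; the exact bin-edge convention is an assumption read off the table layout:

```python
import numpy as np

def dcat_counts(adt, truth):
    """Count ADT-minus-'ground truth' intensity differences per 5 m/s category."""
    d = np.asarray(adt) - np.asarray(truth)
    # Bin edges assumed centred on the labelled values: [-2.5, 2.5) -> '0', etc.
    edges = np.arange(-22.5, 23.0, 5.0)
    labels = ['<-20', '-20', '-15', '-10', '-5', '0',
              '+5', '+10', '+15', '+20', '>+20']
    idx = np.digitize(d, edges)               # 0 .. len(labels)-1
    return {lab: int((idx == i).sum()) for i, lab in enumerate(labels)}
```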

  27. Deep-Dive Validation Tools • Graphical timeline example of the HIE analysis versus observational data and/or TC forecast center Dvorak estimates • Allows for quick analysis of the accuracy of HIE performance versus subjective Dvorak estimates and/or "ground truth" • Plots can be provided in real-time or in post-storm analysis mode • SAB Dvorak estimates and the NHC Best Track are displayed here. [Figure: timeline of HIE and NESDIS/Satellite Analysis Branch (SAB) intensity estimates versus NHC Best Track.]

  28. Deep-Dive Validation Tools • Histogram of HIE and operational center intensity estimate differences from "ground truth" • Provides an easy display of errors between the two methodologies • Can easily identify any biases in intensity differences in either set of estimates • HIE versus SAB Dvorak intensity differences from the NHC Best Track are shown in the graph to the right. [Figure: histograms of HIE and NESDIS/SAB intensity estimate differences versus NHC Best Track.]

  29. Deep-Dive Validation Tools • Display of the HIE automated storm center position versus "ground truth" and forecast interpolation positions • Provides a visual method to determine the accuracy of the automated storm center selection position • Can be used to assess the accuracy of the current storm forecast from the issuing TCFC • Can be compared to aircraft reconnaissance, if available. [Figure: example image displaying storm center location information; legend: reconnaissance, HIE-determined, forecast interpolation.]

  30. Deep-Dive Validation Tools: Storm center positioning errors • Current position validation statistical output example • Storm center positioning error analysis versus "ground truth" (either reconnaissance and/or NHC Best Track information) • Accuracy and precision measurements are displayed for the storm in question or for an entire ocean basin and season • Comparisons with manual positions from the TCFC can be output, if available • Output layout mirrors the output parameters utilized in current operations by NOAA/SAB (a sketch of the distance computation follows below)

POSITIONING ERRORS (distance in nmi), OVERALL
           bias    rmse    aae   stdv   cnt
SAB:LAT    0.00    0.17   0.12   0.17   118
SAB:LON   -0.07    0.24   0.17   0.23   118
SAB:DIST  13.86                         118
ADT:LAT    0.04    0.22   0.17   0.22   118
ADT:LON   -0.05    0.31   0.22   0.31   118
ADT:DIST  18.16                         118

Estimated Position Error (nmi) by Fix Method
Method      Num   (%)    ADT    SAB
FORECAST     77  (65%)   20.1   14.9
SPIRAL       29  (24%)   16.8   13.9
COMBO        12  (10%)    8.9    7.2
EXTRAP        0  ( 0%)    0.0    0.0
OVERALL     118          18.2   13.9
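The DIST rows above are great-circle separations between the estimated and "ground truth" storm centers. A sketch of that computation in nautical miles (haversine formula; the 6371 km mean Earth radius and 1 nmi = 1.852 km conversion are standard assumptions):

```python
import numpy as np

def position_error_nmi(lat1, lon1, lat2, lon2):
    """Great-circle separation between two storm-center fixes, in nautical miles."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 6371.0 * 2 * np.arcsin(np.sqrt(a)) / 1.852   # km -> nmi
```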

  31. Come see our Hurricane Intensity Validation poster: "The GOES-R Hurricane Intensity Estimation (HIE) Algorithm: Overview of Validation Activity and Methodology" (Tim Olander is manning this poster)
