
GOES-R AWG Product Validation Tool Development


Presentation Transcript


  1. GOES-R AWG Product Validation Tool Development AWG GRAFIIR Team, June 16, 2011. Presented by: Mat Gunshor of CIMSS, with Ray Garcia, Allen Huang, Graeme Martin, Eva Schiffer, Hong Zhang and others (CIMSS/UW-Madison)

  2. Products Baseline Products GRAFIIR can currently run (L1 and L2+): • Radiances • Validation of radiance data at the pixel level is a core function of GRAFIIR capability. • Clouds • Clear Sky Mask; Cloud Optical Depth; Cloud Particle Size; Cloud Top Phase; Cloud Top Height; Cloud Top Pressure; Cloud Top Temperature • Soundings • Legacy Vertical Moisture Profile; Legacy Vertical Temperature Profile; Derived Stability Indices (CAPE, LI, etc.); Total Precipitable Water • Fire Hot Spot Characterization • Imagery • Derived Motion Winds • Land Surface/Skin Temperature • Hurricane Intensity • Volcanic Ash Detection • Baseline algorithms are currently produced in GEOCAT.

  3. Products • In the future, GRAFIIR expects to be able to run all of the AWG ABI baseline products by employing the AIT Framework as the processing end of the system. • We expect that eventually all ABI Baseline and Option 2 products will be available to GRAFIIR via the AIT Framework.

  4. Products Bands may also be needed by “upstream” products, such as the cloud mask.

  5. Products

  6. Validation Strategies • GRAFIIR seeks to be able to validate all of the ABI L1 data and L2+ products in the context of analyzing ABI instrument waiver requests from the vendor. • By manipulating ABI proxy data to reflect instrument effects, GRAFIIR compares algorithm results “before” and “after” instrument effects are introduced to proxy data. The objective is to assess the effects of an instrument waiver on product performance for products that require the affected band(s). • Current capability: compare product output, provide statistical analysis, and generate reports automatically through Glance. • Future strategy: obtain the AIT Framework in order to gain the capability of generating any ABI L2 product. • The Framework must be maintained and kept in sync with the AIT version. • It will remain important in the future to maintain synergy between NESDIS scientists and the algorithm developers (at cooperative institutes, for example) by employing the same development environment (the AIT Framework).
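
To make the before/after comparison concrete, here is a minimal Python sketch of the kind of difference statistics involved, assuming the control and waiver product grids are already loaded as numpy arrays. The function name and epsilon value are illustrative; this is not the Glance API.

```python
import numpy as np

def compare_products(control, waiver, epsilon):
    """Simple difference statistics for a before/after product comparison."""
    valid = np.isfinite(control) & np.isfinite(waiver)   # skip fill values
    diff = waiver[valid] - control[valid]
    return {
        "mean_diff": float(np.mean(diff)),
        "rms_diff": float(np.sqrt(np.mean(diff ** 2))),
        "max_abs_diff": float(np.max(np.abs(diff))),
        "fraction_outside_epsilon": float(np.mean(np.abs(diff) > epsilon)),
    }

# Stand-in data: a control field plus a small perturbation as the "waiver" case.
rng = np.random.default_rng(0)
control = rng.normal(270.0, 5.0, size=(100, 100))
waiver = control + rng.normal(0.0, 0.2, size=control.shape)
print(compare_products(control, waiver, epsilon=0.5))
```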

  7. Routine Validation Tools • GRAFIIR has developed validation tools as part of its mission to assess instrument effects on ABI data and products. • The idea of “routine” perhaps does not fit. • Tool development has naturally grown to fit needs. • Tools in use now are more of the deep-dive variety. • GRAFIIR Vision: make the current tools more easily automatable • An automatable version of GEOCAT/Framework paired with Glance and the collocation tools would give many product algorithm teams the ability to easily validate their products against a variety of “truth” datasets.

  8. ”Deep-Dive” Validation Tools • The validation tools used for GRAFIIR vary with the nature of the instrument waiver rather than with product type. • ABI instrument effects that can be applied to simulated ABI Proxy Data from the WRF model. • ABI instrument effects that cannot be (easily/quickly) applied to simulated ABI Proxy Data from the WRF model. • The AWG GRAFIIR Team has responded to ABI waivers for both situations and has developed tools accordingly. • The best validation tool GRAFIIR has is Glance. • This is the most easily applicable, cross-cutting tool we have available to other AWG teams and the AIT. • Glance can be used with both L1 data and L2+ products.

  9. ”Deep-Dive” Validation Tools Glance could make this easier!

  10. ”Deep-Dive” Validation Tools Can you see a difference?

  11. ”Deep-Dive” Validation Tools

  12. ”Deep-Dive” Validation Tools • The validation tools used for GRAFIIR vary with the nature of the instrument waiver rather than with product type. • ABI instrument effects that can be applied to simulated ABI Proxy Data from the WRF model. • ABI instrument effects that cannot be (easily/quickly) applied to simulated ABI Proxy Data from the WRF model.

  13. ”Deep-Dive” Validation Tools • ABI instrument effects that can be applied to simulated ABI Proxy Data from the WRF model. • Example: Striping in one or more spectral bands (a minimal striping sketch follows below). • Example: Increased noise in one or more spectral bands. • Example: Navigation errors in one or more spectral bands. • Note: ABI specifications exist for all of these parameters, and a waiver is only required when the expected instrument performance will be worse than the specs. • When the effect is relatively easy to simulate in the existing simulated ABI proxy data sets, the process is fairly straightforward.
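
As a concrete illustration of the striping example above, the sketch below offsets every row scanned by one (hypothetical) misbehaving detector in a simulated radiance image. The array shape, detector count, and offset magnitude are assumptions for illustration, not ABI specifications.

```python
import numpy as np

def add_striping(radiance, n_detectors, bad_detector, offset):
    """Return a copy of the image with a constant bias on one detector's rows."""
    striped = radiance.copy()
    rows = np.arange(radiance.shape[0])
    striped[rows % n_detectors == bad_detector, :] += offset
    return striped

# Flat stand-in radiance field; every 16th row picks up a 0.25-unit bias.
proxy = np.full((696, 1024), 95.0)
striped = add_striping(proxy, n_detectors=16, bad_detector=7, offset=0.25)
```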

  14. ”Deep-Dive” Validation Tools • ABI instrument effects that can be applied to simulated ABI Proxy Data from the WRF model. • There are 4 primary steps to analyzing a waiver such as this: • Simulate the instrument effect in the proxy data. (MATLAB) • This is the least straightforward part of the process, depending on what the waiver is for and how it is deemed to affect the radiance data. • Produce products that rely on the affected spectral bands using data before and after the instrument effect was introduced. (GEOCAT) • Could be done in the Framework; this is a straightforward step for an analyst familiar with the software. • Compare “before” and “after” products; analyze differences. (Glance) • Glance can read multiple file types and provide a variety of types of analysis. • Obtain expert analysis of the results. • Typically we get input from the algorithm scientists and generate a PowerPoint presentation that also serves as a report.

  15. ”Deep-Dive” Validation Tools • The following slides are an example of this first type of validation analysis. • Existing proxy data are altered to reflect the effects of some out-of-spec component of the instrument. In this case, we are pretending that one line of the detector array in one spectral band is noisier than it should be. • First, the instrument effect is simulated in the proxy data. The validation shown here is visual, but we do statistical validation in this step as well; for instance, the generated random noise is tested (is it normally distributed, with the expected standard deviation?). This is done in MATLAB; a Python sketch of this check follows below. • Second, products are generated that use this spectral band. This step is done in GEOCAT or could be done in the Framework. • Third, we analyze the difference in the products generated using “control” and “waiver” data. We only show cloud top height here. This step is done using Glance.
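
The first step above (done by the team in MATLAB) can be rendered in Python roughly as follows: generate the extra noise for the out-of-spec line and check that it is zero-mean, normally distributed, and has the intended standard deviation. The sigma value and the normality test chosen here are illustrative assumptions.

```python
import numpy as np
from scipy import stats

intended_sigma = 0.3                                      # hypothetical extra noise level
rng = np.random.default_rng(1)
line_noise = rng.normal(0.0, intended_sigma, size=1024)   # noise to inject on one line

# Statistical validation of the simulated noise before it is trusted.
_, p_value = stats.normaltest(line_noise)                 # D'Agostino-Pearson test
print("mean:", line_noise.mean())
print("std dev (intended %.2f): %.3f" % (intended_sigma, line_noise.std(ddof=1)))
print("consistent with normality (p > 0.05):", p_value > 0.05)
```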

  16. ”Deep-Dive” Validation Tools The Control Case: Magnified by 3x and focused on the Texas/Oklahoma border area convection where one of the out-of-spec lines passes through.

  17. ”Deep-Dive” Validation Tools The Waiver Case: Magnified by 3x and focused on the Texas/Oklahoma border area convection where one of the out-of-spec lines passes through.

  18. ”Deep-Dive” Validation Tools The Difference: Magnified by 3x, the out-of-spec line is evident in the brightness temperature difference image.

  19. ”Deep-Dive” Validation Tools • http://cimss.ssec.wisc.edu/goes_r/grafiir/PC-1134/Clouds_epsilon0/ • HTML report generated by Glance with statistics and images. • The “Zero Tolerance” analysis shows all the absolute changes introduced by the out-of-spec line noise. • Follow the link for the statistical report (click on a product variable name to see the reports for each one): • Cloud Top Height • Cloud Top Pressure • Cloud Top Temperature • Cloud Mask (unaffected) • Cloud Phase (unaffected) • Cloud Type (unaffected)

  20. ”Deep-Dive” Validation Tools Difference Image: Most of the image is a difference of 0. This is Cloud Top Height, but the image looks similar for Cloud Top Pressure and Cloud Top Temperature; from Glance.
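
For anyone reproducing this kind of figure outside of Glance, a product difference image can be rendered with a diverging colormap centered on zero. The synthetic data, value range, and colormap in the sketch below are arbitrary choices, not what Glance does internally.

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic difference field: zero everywhere except one noisy out-of-spec line.
rng = np.random.default_rng(4)
diff = np.zeros((200, 200))
diff[120, :] = rng.normal(0.0, 600.0, size=200)   # meters of cloud top height change

limit = np.max(np.abs(diff))
plt.imshow(diff, cmap="RdBu_r", vmin=-limit, vmax=limit)
plt.colorbar(label="Cloud Top Height difference (m)")
plt.title("Waiver minus Control")
plt.savefig("cth_difference.png", dpi=150)
```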

  21. ”Deep-Dive” Validation Tools “Trouble Points”: Trouble points are marked for any pixel in the two output files whose difference exceeds epsilon (which is 500m in this case). Cloud Top Height shown, from Glance.
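
A rough numpy rendering of the "trouble point" idea, assuming control and waiver cloud top height grids in meters and the 500 m epsilon quoted above; this illustrates the concept rather than Glance's internal code.

```python
import numpy as np

def trouble_points(control, waiver, epsilon=500.0):
    """Mask of pixels whose absolute difference exceeds epsilon (meters here)."""
    diff = np.abs(waiver - control)
    return np.isfinite(diff) & (diff > epsilon)

# Stand-in cloud top heights with one degraded line in the waiver case.
rng = np.random.default_rng(2)
control_cth = rng.uniform(1000.0, 12000.0, size=(200, 200))
waiver_cth = control_cth.copy()
waiver_cth[120, :] += 800.0
print("trouble point count:", int(trouble_points(control_cth, waiver_cth).sum()))
```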

  22. ”Deep-Dive” Validation Tools The statistics are from Glance

  23. ”Deep-Dive” Validation Tools • ABI instrument effects that cannot be (easily/quickly) applied to simulated ABI Proxy Data from the WRF model. • Example: Out-of-Spec Spectral Response Functions. • If the effect cannot be easily or accurately replicated in the simulated data, we cannot generate products in GEOCAT and compare the outputs in Glance. • SRF changes are generally too time-consuming to get into the proxy data because they involve altering the forward model calculations. Proxy data generated from forward model calculations using forecast model atmospheric profile information is expensive in time to produce, and we typically have only 1-2 weeks to respond to a waiver.

  24. ”Deep-Dive” Validation Tools • ABI instrument effects that cannot be (easily/quickly) applied to simulated ABI Proxy Data from the WRF model. • Example: Out-of-Spec Spectral Response Functions. • There are 3 primary steps to analyzing a waiver such as this: • Simulate the instrument effect in proxy data. (MATLAB) • For example, convolve high spectral resolution data with before/after SRFs. • Since products cannot be generated, use alternatives. (MATLAB) • In the case of SRFs, compute the brightness temperatures of convolved high spectral resolution data (e.g., IASI) and compare the differences to the spec noise to get an understanding of their significance (a sketch follows below). • We have to be sure we are still measuring key components (e.g., SO2). • The products we can run in GEOCAT have all had analysis done on them previously with “pure” proxy data compared to data with spec noise added. • Obtain expert analysis of the results. • Typically we get input from the algorithm scientists and generate a PowerPoint presentation that also serves as a report.
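
Here is a minimal sketch of the first two steps, assuming an IASI-like spectrum and two candidate SRFs sampled on the same wavenumber grid. The band averaging is a plain SRF-weighted mean and the Planck inversion uses standard constants; the spectrum and SRF shapes below are synthetic stand-ins, not the actual compliant/non-compliant curves.

```python
import numpy as np

H = 6.62607015e-34   # Planck constant (J s)
C = 2.99792458e8     # speed of light (m/s)
KB = 1.380649e-23    # Boltzmann constant (J/K)

def planck_radiance(wavenumber_cm, temp_k):
    """Planck radiance in mW / (m^2 sr cm^-1) at wavenumber (cm^-1)."""
    nu = wavenumber_cm * 100.0                       # cm^-1 -> m^-1
    rad = 2.0 * H * C**2 * nu**3 / (np.exp(H * C * nu / (KB * temp_k)) - 1.0)
    return rad * 1e5                                 # W/(m^2 sr m^-1) -> mW/(m^2 sr cm^-1)

def brightness_temp(wavenumber_cm, radiance):
    """Invert Planck at a single wavenumber to brightness temperature (K)."""
    nu = wavenumber_cm * 100.0
    rad = radiance * 1e-5
    return H * C * nu / (KB * np.log(1.0 + 2.0 * H * C**2 * nu**3 / rad))

def band_average(spectrum, srf):
    """SRF-weighted mean radiance over the band."""
    return np.sum(spectrum * srf) / np.sum(srf)

wn = np.linspace(1100.0, 1250.0, 600)                        # cm^-1 grid near 8.5 um
spectrum = planck_radiance(wn, 285.0)                        # stand-in IASI-like spectrum
srf_compliant = np.exp(-0.5 * ((wn - 1175.0) / 20.0) ** 2)   # hypothetical SRFs
srf_noncompliant = np.exp(-0.5 * ((wn - 1178.0) / 22.0) ** 2)

center = 1175.0
bt_ok = brightness_temp(center, band_average(spectrum, srf_compliant))
bt_bad = brightness_temp(center, band_average(spectrum, srf_noncompliant))
print("brightness temperature difference (K):", bt_ok - bt_bad)
```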

  25. ”Deep-Dive” Validation Tools The following slides are an example of this second type of validation analysis • The 8.5um band SRF may be slightly out of spec • Will we still be able to see SO2? • How will the radiances be affected? • First, SRFs must be obtained and altered. • We were given SRFs which were “compliant” and “non-compliant” with the specs. • These are not being shown here to avoid ITAR designation • Second, radiances and brightness temperatures are generated from both calculated and measured high spectral resolution data. • We had some calculated spectra available with various amounts of SO2 • Third, we analyze the differences in the radiances of the compliant and non-compliant SRFs. • These are compared to the spec noise for perspective.

  26. ”Deep-Dive” Validation Tools • Figure panels: Non-Compliant 8.5 um SRF convolved with IASI; Compliant 8.5 um SRF convolved with IASI; brightness temperature difference image.

  27. ”Deep-Dive” Validation Tools • Spec-Compliant minus Non-Spec-Compliant 8.5um Radiance Difference to NEdN Ratio: the ratio of the radiance difference (Spec Compliant minus Non-Spec Compliant) to the spec noise (NEdN) in this band (0.1303). When this ratio is less than 1, the difference is less than the spec NEdN.
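
In code, the quantity shown here is just the difference of the two convolved radiances divided by the band's spec NEdN. The sketch below assumes the convolved radiance arrays are already available (for example, from a convolution like the one sketched earlier); the 0.1303 NEdN value is the one quoted on this slide, and the stand-in data are synthetic.

```python
import numpy as np

NEDN_8P5 = 0.1303   # spec NEdN for the 8.5 um band, as quoted on this slide

def nedn_ratio(rad_compliant, rad_noncompliant, nedn=NEDN_8P5):
    """Radiance difference over spec noise; |ratio| < 1 means within spec NEdN."""
    return (rad_compliant - rad_noncompliant) / nedn

# Synthetic stand-ins for the two sets of convolved radiances.
rng = np.random.default_rng(3)
rad_ok = rng.uniform(40.0, 60.0, size=1000)
rad_bad = rad_ok + rng.normal(0.0, 0.05, size=rad_ok.shape)
ratios = nedn_ratio(rad_ok, rad_bad)
print("fraction with |ratio| < 1:", float(np.mean(np.abs(ratios) < 1.0)))
```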

  28. ”Deep-Dive” Validation Tools • Using Glance to compare L2+ Product Output to a “truth” dataset that was generated as product output. • Most of GRAFIIR’s waiver tasks are to measure the effects of a change on product output. • But many algorithm teams have a need to validate their product against another type of measured data to quantify product performance. • Glance can be used to do more validation. • As teams learn what their needs are and develop capabilities we hope to be able to merge these ideas. • Ideally, scientists should be doing more analysis and not have to worry about the traditionally difficult tasks of collocating data, processing it, etc.

  29. ”Deep-Dive” Validation Tools • Example: Using Glance to compare WRF model output cloud top temperature to the AWG product algorithm cloud top temperature generated with the simulated Proxy ABI data (generated from the WRF-model output). • The WRF model cloud top information is treated as “truth” • We expect there to be differences because the model reports a cloud top when it is too optically thin to be detected by ABI. • So WRF model cloud tops should be higher and colder than those in the proxy data. • Note: One file is an hdf file output from GEOCAT and the other is a netCDF generated from WRF model output.
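
A hedged sketch of the first step of this comparison: reading the two files and differencing the fields. The file names and variable names are illustrative assumptions; the GEOCAT output is HDF4 (read here with pyhdf) and the WRF-derived file is netCDF (read with netCDF4), and both fields are assumed to already be on the same grid.

```python
import numpy as np
from pyhdf.SD import SD, SDC        # HDF4 reader for the GEOCAT output
from netCDF4 import Dataset         # netCDF reader for the WRF-derived output

# Hypothetical file and variable names; substitute the real ones.
geocat = SD("geocat_abi_output.hdf", SDC.READ)
abi_ctt = geocat.select("cloud_top_temperature")[:].astype(np.float64)

wrf = Dataset("wrf_cloud_top.nc")
wrf_ctt = np.asarray(wrf.variables["cloud_top_temperature"][:], dtype=np.float64)

# WRF cloud tops should tend to be higher and colder, since the model reports
# cloud that is too optically thin for the ABI algorithm to detect.
diff = wrf_ctt - abi_ctt
print("mean difference (WRF - ABI product):", np.nanmean(diff), "K")
print("fraction where WRF is colder:", float(np.nanmean(diff < 0.0)))
```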

  30. ”Deep-Dive” Validation Tools • Cloud Top Temperature from ABI cloud algorithm (from Glance)

  31. ”Deep-Dive” Validation Tools • Cloud Top Temperature from WRF (from Glance)

  32. ”Deep-Dive” Validation Tools • Difference image, WRF output – Proxy L2 (from Glance)

  33. ”Deep-Dive” Validation Tools • Statistics from Glance (Numerical Comparison Statistics): correlation*: 0.8433; diff_outside_epsilon_count*: 965566; diff_outside_epsilon_fraction*: 1; max_diff*: 98.64; mean_diff*: 12.69; median_diff*: 9.739; mismatch_points_count*: 1359122; mismatch_points_fraction*: 0.4191; perfect_match_count*: 0; perfect_match_fraction*: 0; r-squared correlation*: 0.7112; rms_diff*: 18.97; std_diff*: 14.11
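
For reference, most of these statistics can be reproduced directly with numpy once the two fields are loaded, as in the rough sketch below. The "mismatch" definition here (a point that is valid in one field but not the other) is an assumption about the convention, not taken from Glance's source.

```python
import numpy as np

def comparison_stats(truth, test, epsilon):
    """Recompute a few Glance-style statistics for two fields on the same grid."""
    valid_truth, valid_test = np.isfinite(truth), np.isfinite(test)
    both = valid_truth & valid_test
    diff = test[both] - truth[both]
    corr = float(np.corrcoef(truth[both], test[both])[0, 1])
    return {
        "correlation": corr,
        "r-squared correlation": corr ** 2,
        "mean_diff": float(np.mean(diff)),
        "rms_diff": float(np.sqrt(np.mean(diff ** 2))),
        "diff_outside_epsilon_fraction": float(np.mean(np.abs(diff) > epsilon)),
        "mismatch_points_fraction": float(np.mean(valid_truth ^ valid_test)),
        "perfect_match_fraction": float(np.mean(diff == 0.0)),
    }

# Example with synthetic fields; some points are valid in only one field.
rng = np.random.default_rng(5)
truth = rng.normal(250.0, 15.0, size=(300, 300))
test = truth + rng.normal(10.0, 14.0, size=truth.shape)
test[rng.random(test.shape) < 0.05] = np.nan
print(comparison_stats(truth, test, epsilon=1.0))
```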

  34. Ideas for the Further Enhancement and Utility of Validation Tools • GRAVA (GOES-R Advanced Validation Automation) • Greater automation of validation tasks. • An extension of GRAFIIR to optimize field campaign work for GOES-R. • Beginning with a planned analysis of field campaigns to assess how to optimize their utilization for GOES-R Cal/Val. • AIT Framework at CIMSS to expand our access to more products. • Converge on file types for inputs and outputs (avoid reliance on McIDAS AREA files for input). • Converge on calibration methods (adopt the Imagery Team file format). • Converge on navigation (Fixed Grid Format). • Merging the collocation capabilities with Glance should make validation easier for a host of algorithm teams. • The GRAFIIR Team’s use of Glance thus far has been fairly limited in that comparisons are normally done as a before/after look at instrument effects on product performance. But Glance can be used to compare to a “truth” dataset that is not a prior run of the product.

  35. Summary • Glance • The GRAFIIR team has helped to develop a validation tool which can be used for both routine validation and as a deep-dive tool. • In analyzing multiple ABI waivers to date, the GRAFIIR team has been doing both L1 radiance and L2 product validation already using Glance. • Glance can meet the needs of many product algorithm teams. • GRAVA • A future extension of GRAFIIR for greater automation • Coordination of collocation, calibration/validation, field campaign and other data, L1 and L2+ ABI data and products, visualization, and Glance. • GOES-R AWG • The GRAFIIR toolset is not a replacement for science expertise. • Scientists should spend less time worrying about: • File formats • Collocation • The GRAFIIR team can help!

  36. Summary • More Information: • How to Install Glance • Glance Documentation • Eva Schiffer <evas@ssec.wisc.edu> • Ray Garcia <rayg@ssec.wisc.edu> • Mat Gunshor <matg@ssec.wisc.edu>
