

Variability of Temperature Measurement in the Canadian Surface Weather and Reference Climate Networks. By Gary Beaney, Tomasz Stapf, and Brian Sheppard, Meteorological Service of Canada.





Presentation Transcript


  1. Variability of Temperature Measurement in the Canadian Surface Weather and Reference Climate Networks By Gary Beaney, Tomasz Stapf, and Brian Sheppard, Meteorological Service of Canada

  2. Background – Regional Procurement • When automation of weather stations began in Canada in the late 1980s, there was no specifically designated “climate” network • There was instead a network of “primary” stations recording various meteorological parameters

  3. Background – Regional Procurement • When automation of weather stations began in Canada in the late 1980s, there was no specifically designated “climate” network • There was instead a network of “primary” stations recording various meteorological parameters • Sensors for this network were procured by five distinct Environment Canada Regions: Pacific & Yukon; Prairie and Northern; Ontario; Quebec; Atlantic • This resulted in a wide variety of instruments throughout the country, all measuring the same parameter • Many of these “primary” stations today are part of Environment Canada’s Surface Weather and Climate Networks

  4. Background – Sensors in Use • A national survey was undertaken to catalogue the various sensors being used to measure temperature in what are now considered Canada’s Surface Weather and Climate Networks • Seven different sensors were found to be the predominant source of temperature data

  5. Background – Sensors in Use • A national survey was undertaken to catalogue the various sensors being used to measure temperature in what are now considered Canada’s Surface Weather and Climate Networks • Seven different sensors were found to be the predominant source of temperature data • In addition to sensor type, differences were reported with respect to shield type and shield aspiration

  6. Background – Sensors in Use • Eleven predominant sensor types/configurations were found to be in use in the Canadian Surface Weather and Climate Networks: • CSI 44002A Wooden Screen (WS) Non-Aspirated (NA) • CSI 44212 Wooden Screen (WS) Aspirated (A) • CSI HMP35C Gill 12-Plate Screen (G12) Non-Aspirated (NA) • CSI HMP45C Gill 12-Plate Screen (G12) Non-Aspirated (NA) • CSI HMP45C212 Wooden Screen (WS) Aspirated (A) • CSI HMP45C212 Wooden Screen (WS) Non-Aspirated (NA) • CSI HMP45C212 Gill Screen (G) Aspirated (A) • CSI HMP45C212 Gill 12-Plate Screen (G12) Non-Aspirated (NA) • CSI HMP45CF Wooden Screen (WS) Non-Aspirated (NA) • CSI HMP45CF Gill 12-Plate Screen (G12) Non-Aspirated (NA) • CSI PRT1000 Wooden Screen (WS) Non-Aspirated (NA)

  7. Purpose: attempt to quantify the variability of temperature measurement in the Canadian Surface Weather and Climate Networks • Is a sensor’s reading of temperature close to the truth? Operational Comparability = sqrt[ Σ (X_ai − X_bi)² / N ], where X_ai = ith measurement made by one system; X_bi = ith simultaneous measurement made by another system; N = number of samples used • Is a sensor model consistent in its ability to measure temperature from one identical sensor to another? • Is a sensor model consistent in its ability to measure temperature over a range of different temperatures?

  8. Data - Establishing a “true” Temperature Reference • The average of three YSI SP20048 sensors was used as the reference • Each sensor was calibrated and the associated corrections were applied • The three reference temperature sensors were installed in a triangle formation in aspirated Stevenson Screens • The average of the three was taken to represent the “true” temperature at the middle of the triangle • Only instances in which all three reference temperature sensors agreed to within 0.5 °C were used in the analysis
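The reference-temperature rule described on this slide can be sketched as a small function: average the three calibrated sensors, but discard any minute where the three readings disagree by more than 0.5 °C. This is a minimal illustration of the stated logic; the function and variable names are illustrative, not from the study.

```python
def reference_temperature(r1, r2, r3, tolerance=0.5):
    """Return the mean of three simultaneous readings (degrees C),
    or None if they disagree by more than `tolerance` degrees C,
    in which case the minute is excluded from the analysis."""
    readings = (r1, r2, r3)
    if max(readings) - min(readings) > tolerance:
        return None  # sensors disagree: drop this minute
    return sum(readings) / 3.0

# A minute where all three sensors agree within 0.5 degC is kept...
print(reference_temperature(10.0, 10.2, 10.1))
# ...but a 0.7 degC spread causes the minute to be rejected.
print(reference_temperature(10.0, 10.2, 10.7))  # None
```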

  9. Purpose: attempt to quantify the variability of temperature measurement in the Canadian Surface Weather and Climate Networks • Is a sensor’s reading of temperature close to the truth? Operational Comparability = sqrt[ Σ (X_ai − X_bi)² / N ], where X_ai = ith measurement made by one system; X_bi = ith simultaneous measurement made by another system; N = number of samples used • Is a sensor model consistent in its ability to measure temperature from one identical sensor to another? Functional Precision = sqrt[ Σ (X_ai − X_bi)² / N ], where X_ai = ith measurement made by one system; X_bi = ith simultaneous measurement made by an identical system; N = number of samples used • Is a sensor model consistent in its ability to measure temperature over a range of different temperatures?
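Both statistics defined on this slide are the root-mean-square of paired simultaneous differences; only the choice of pairing changes. For operational comparability the pair is (sensor under test, reference); for functional precision it is two identical sensors (units A and B). A minimal sketch, with illustrative names:

```python
import math

def rms_difference(x_a, x_b):
    """sqrt( sum_i (x_ai - x_bi)^2 / N ) over paired, simultaneous
    samples. Used for both Operational Comparability (x_b = the
    reference series) and Functional Precision (x_b = an identical
    sensor's series)."""
    if len(x_a) != len(x_b) or not x_a:
        raise ValueError("need equal-length, non-empty series")
    n = len(x_a)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x_a, x_b)) / n)

sensor = [1.1, 2.0, 2.9]       # sensor under test (degC)
reference = [1.0, 2.0, 3.0]    # simultaneous reference values (degC)
print(round(rms_difference(sensor, reference), 4))  # about 0.0816
```

A score of 0 would mean the two series agree at every sample; larger scores mean larger typical disagreement, in the same units (°C) as the data.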

  10. Data - Sensors Under Test • CSI 44002A Wooden Screen (WS) Non-Aspirated (NA) • CSI 44212 Wooden Screen (WS) Aspirated (A) • CSI HMP35C Gill 12-Plate Screen (G12) Non-Aspirated (NA) • CSI HMP45C Gill 12-Plate Screen (G12) Non-Aspirated (NA) • CSI HMP45C212 Wooden Screen (WS) Aspirated (A) • CSI HMP45C212 Wooden Screen (WS) Non-Aspirated (NA) • CSI HMP45C212 Gill Screen (G) Aspirated (A) • CSI HMP45C212 Gill 12-Plate Screen (G12) Non-Aspirated (NA) • CSI HMP45CF Wooden Screen (WS) Non-Aspirated (NA) • CSI HMP45CF Gill 12-Plate Screen (G12) Non-Aspirated (NA) • CSI PRT1000 Wooden Screen (WS) Non-Aspirated (NA)

  11. Data - Sensors Under Test • CSI 44002A Wooden Screen (WS) Non-Aspirated (NA) A • CSI 44002A Wooden Screen (WS) Non-Aspirated (NA) B • CSI 44212 Wooden Screen (WS) Aspirated (A) A • CSI 44212 Wooden Screen (WS) Aspirated (A) B • CSI HMP35C Gill 12-Plate Screen (G12) Non-Aspirated (NA) A • CSI HMP35C Gill 12-Plate Screen (G12) Non-Aspirated (NA) B • CSI HMP45C Gill 12-Plate Screen (G12) Non-Aspirated (NA) A • CSI HMP45C Gill 12-Plate Screen (G12) Non-Aspirated (NA) B • CSI HMP45C212 Wooden Screen (WS) Aspirated (A) A • CSI HMP45C212 Wooden Screen (WS) Aspirated (A) B • CSI HMP45C212 Wooden Screen (WS) Non-Aspirated (NA) A • CSI HMP45C212 Wooden Screen (WS) Non-Aspirated (NA) B • CSI HMP45C212 Gill Screen (G) Aspirated (A) A • CSI HMP45C212 Gill Screen (G) Aspirated (A) B • CSI HMP45C212 Gill 12-Plate Screen (G12) Non-Aspirated (NA) A • CSI HMP45C212 Gill 12-Plate Screen (G12) Non-Aspirated (NA) B • CSI HMP45CF Wooden Screen (WS) Non-Aspirated (NA) A • CSI HMP45CF Wooden Screen (WS) Non-Aspirated (NA) B • CSI HMP45CF Gill 12-Plate Screen (G12) Non-Aspirated (NA) A • CSI HMP45CF Gill 12-Plate Screen (G12) Non-Aspirated (NA) B • CSI PRT1000 Wooden Screen (WS) Non-Aspirated (NA) A • CSI PRT1000 Wooden Screen (WS) Non-Aspirated (NA) B

  12. Purpose: attempt to quantify the variability of temperature measurement in the Canadian Surface Weather and Climate Networks • Is a sensor’s reading of temperature close to the truth? Operational Comparability = sqrt[ Σ (X_ai − X_bi)² / N ], where X_ai = ith measurement made by one system; X_bi = ith simultaneous measurement made by another system; N = number of samples used • Is a sensor model consistent in its ability to measure temperature from one identical sensor to another? Functional Precision = sqrt[ Σ (X_ai − X_bi)² / N ], where X_ai = ith measurement made by one system; X_bi = ith simultaneous measurement made by an identical system; N = number of samples used • Is a sensor model consistent in its ability to measure temperature over a range of different temperatures? • Test data were divided into three categories based on reference temperature: reference temperature ≤ -5 °C; > -5 °C and ≤ 5 °C; > 5 °C
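The three-way stratification described above amounts to assigning each minute a category from its reference temperature. A sketch of that binning rule (category labels are illustrative):

```python
def temperature_category(t_ref):
    """Assign a reference temperature (degC) to one of the three
    analysis categories used in the study."""
    if t_ref <= -5.0:
        return "<= -5"
    elif t_ref <= 5.0:
        return "> -5 and <= 5"
    else:
        return "> 5"

# One sample from each category:
print([temperature_category(t) for t in (-12.0, 0.0, 18.5)])
# ['<= -5', '> -5 and <= 5', '> 5']
```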

  13. Data - Test Site • Instruments installed at Environment Canada’s Centre for Atmospheric Research Experiments • Located approximately 70 km NW of Toronto, Ontario

  14. Data - Sensors Under Test • The experiment ran from December 2002 to June 2003 • Minutely data were collected from all three reference sensors and all 22 sensors under test • To maintain a consistent dataset for analysis, if any sensor under test was missing a minutely value, the values for that minute were removed for all other sensors under test
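The alignment rule above is listwise deletion across sensors: a minute survives only if every sensor reported a value for it. A minimal sketch, assuming each sensor's record is a mapping from minute index to temperature (sensor names below are illustrative):

```python
def align_minutes(data):
    """Keep only the minutes present in every sensor's record.
    `data` maps sensor name -> {minute: value}."""
    common = set.intersection(*(set(series) for series in data.values()))
    return {name: {m: series[m] for m in sorted(common)}
            for name, series in data.items()}

data = {
    "HMP45C_A": {1: 2.1, 2: 2.2, 3: 2.3},
    "HMP45C_B": {1: 2.0, 3: 2.4},  # minute 2 missing for this sensor
}
aligned = align_minutes(data)
# Minute 2 is dropped for every sensor, not just the one missing it.
print(sorted(aligned["HMP45C_A"]))  # [1, 3]
```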

  15. Results

  16. Results – Operational Comparability Scores (°C) [Charts: scores for each sensor configuration in three categories: ≤ -5 °C; > -5 °C and ≤ 5 °C; > 5 °C]

  17. Results – Operational Comparability Scores (°C)

           ≤ -5 °C   > -5 °C and ≤ 5 °C   > 5 °C
  Best      0.03           0.03            0.07
  Worst     0.23           0.15            0.29
  Avg.      0.14           0.11            0.15
  Range     0.20           0.12            0.22


  19. Results – Percentage Frequency of Differences from Reference [Charts: percentage frequency of difference (%) for the sensors with the best and worst Operational Comparability scores, in three categories: ≤ -5 °C; > -5 °C and ≤ 5 °C; > 5 °C]

  20. Results – Percentage Frequency of Differences from Reference • Sensors with the best Operational Comparability scores: 0.05% (≤ -5 °C), 0.05% (> -5 °C and ≤ 5 °C), and 0.02% (> 5 °C) of minutely differences from the reference exceeded 0.5 °C • Sensors with the worst Operational Comparability scores: 15.79%, 7.38%, and 12.43% respectively

  21. Results – Differences from Reference • Time series of hourly differences from the reference temperature over the test period, for the sensors with the highest and lowest Operational Comparability scores • Difference between means: 0.34 °C (≤ -5 °C), 0.21 °C (> -5 °C and ≤ 5 °C), 0.37 °C (> 5 °C)

  22. Results – Functional Precision [Charts: Functional Precision scores (°C) for each sensor configuration in three categories: ≤ -5 °C; > -5 °C and ≤ 5 °C; > 5 °C]

  23. Results – Functional Precision Scores (°C)

           ≤ -5 °C   > -5 °C and ≤ 5 °C   > 5 °C
  Best      0.04           0.03            0.06
  Worst     0.16           0.12            0.19
  Avg.      0.07           0.06            0.10
  Range     0.12           0.09            0.13


  25. Results – Difference from Reference, Sensors with Highest and Lowest Functional Precision Scores • Difference between means, ≤ -5 °C: 0.05 °C and 0.3 °C • > -5 °C and ≤ 5 °C: 0.002 °C and 0.3 °C • > 5 °C: 0.15 °C and 0.3 °C

  26. Conclusions • Purpose of study: attempt to quantify the variability of temperature measurement in the Canadian Surface Weather and Climate Networks • Closeness to the “truth”: wide range of Operational Comparability scores observed • Highest – 0.23 °C • Lowest – 0.03 °C • In the worst case, over 15% of minutely differences from the reference exceeded 0.5 °C • Consistency from one identical sensor to another: wide range of Functional Precision scores observed • Highest – 0.19 °C • Lowest – 0.03 °C • Temperature dependence: • PRT 1000 WS NA A – best Operational Comparability score in the ≤ -5 °C category • HMP45C212 G A A – best Operational Comparability score in the > -5 °C and ≤ 5 °C category • 44002A WS NA A – best Operational Comparability score in the > 5 °C category

  27. Final Note – Future Instrument Procurement • To avoid such variability in the future, a single temperature sensor model will be procured by a central body and used at all stations throughout Canada • It has been proposed that the analysis methodology used in this study be used to select the best instruments for future procurements • Analysis will be undertaken at three different test sites representing significantly different climatologies • This should result in a more uniform measurement of temperature and other parameters across Canada

  28. Questions?

  29. [Chart: sensor with the worst Operational Comparability score and the best Functional Precision score]

  30. Results – Difference from Reference Mean (°C) • Values represent the difference between the mean of each sensor under test and the reference mean (SUT - Reference), for the three categories: ≤ -5 °C; > -5 °C and ≤ 5 °C; > 5 °C • A t-test was used to determine whether the observed differences in means were significant at the 95% confidence level (sensors with significant differences are highlighted in red in the chart)
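The significance test described above can be sketched with a two-sample (Welch's) t statistic computed from the two series' means and variances. This is an illustrative implementation, not the study's own code; for the large samples in this experiment, |t| > 1.96 approximates significance at the 95% confidence level.

```python
import statistics

def welch_t(sample_a, sample_b):
    """Welch's two-sample t statistic for a difference in means,
    allowing unequal variances and sample sizes."""
    var_a = statistics.variance(sample_a)
    var_b = statistics.variance(sample_b)
    se = (var_a / len(sample_a) + var_b / len(sample_b)) ** 0.5
    return (statistics.mean(sample_a) - statistics.mean(sample_b)) / se

# Synthetic example: a sensor biased 0.3 degC warm relative to a
# reference, over 200 samples with identical spread.
reference = [10.0 + 0.01 * i for i in range(200)]
sensor = [10.3 + 0.01 * i for i in range(200)]
t = welch_t(sensor, reference)
print(abs(t) > 1.96)  # True: the 0.3 degC offset is significant
```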

  31. Results – Difference Between Identical Sensors Under Test (°C) • Values represent the absolute value of the difference between the means of identical sensors in identical configurations, for the three categories: ≤ -5 °C; > -5 °C and ≤ 5 °C; > 5 °C • A t-test was used to determine whether the observed differences in means were significant at the 95% confidence level (sensors with significant differences are highlighted in red in the chart)
