The PM2.5 Model Evaluation Workshop, held on February 10, 2004, in Chapel Hill, NC, addressed model performance issues for PM2.5 and Regional Haze that are relevant to State Implementation Plan (SIP) modeling. Participant discussions, covering the latest work on performance evaluations, a review of the draft guidance, required documentation, and methods for assessing model accuracy, were intended to inform revisions to the model performance evaluation section of the PM2.5 and Regional Haze modeling guidance. Key topics included statistical metrics, operational evaluations, and performance goals for models used in air quality management.
PM2.5 Model Performance Evaluation: Purpose and Goals
PM Model Evaluation Workshop
February 10, 2004, Chapel Hill, NC
Brian Timin, EPA/OAQPS
Purpose
• To discuss PM2.5 and Regional Haze model performance issues that are relevant to SIP modeling.
• The discussions and information will be used to enhance the model performance evaluation section of the PM2.5 and Regional Haze modeling guidance.
Goals
• For everyone in the community to learn more about the latest work on PM model performance evaluations
• To gather enough information to be able to revise the guidance
• To listen to opinions and recommendations
PM2.5 Model Performance Evaluation: What’s in the Modeling Guidance?
PM Model Evaluation Workshop
February 10, 2004, Chapel Hill, NC
Brian Timin, EPA/OAQPS
Contents
• Status of guidance
• What’s in the guidance
• Review of Chapter 16: Model performance
Status of DRAFT Guidance
• Draft “Guidance for Demonstrating Attainment of Air Quality Goals for PM2.5 and Regional Haze”, January 2001
• Living document: may be revised as needed and posted on EPA’s website, http://www.epa.gov/scram001/guidance/guide/draft_pm.pdf
• Will finalize guidance as part of PM2.5 implementation rule (2004)
What’s in the Guidance
• Part I: Using Model Results
  • Attainment test
    • Annual PM2.5 NAAQS
    • 24-hr PM2.5 NAAQS
  • Regional haze reasonable progress test
  • “Hot spot” modeling
  • Using weight of evidence
  • Data gathering needs
  • Required documentation
What’s in the Guidance (cont’d)
• Part II: Generating Model Results
  • Conceptual description
  • Modeling protocol
  • Selecting a model(s)
  • Choosing days
  • Selecting domain & spatial resolution
  • Developing met inputs
  • Developing emissions inputs
  • Evaluating model performance (Chapter 16)
  • Evaluating control strategies
Overview of Chapter 16
How Do I Assess Model Performance and Make Use of Diagnostic Analyses?
Model Performance: Introduction
• How well is the model able to replicate observed concentrations of PM mass and its components (and precursors)?
• How accurately does the model characterize the sensitivity of component concentrations to changes in emissions?
Types of Analyses
• Operational
  • Statistics
  • Scatter plots
  • Time series plots
• Diagnostic
  • Ratios of indicator species
  • Process analysis
  • Sensitivity tests
“Big Picture” Operational Evaluation
• Graphical displays
  • PM2.5 and PM components
  • Time series plots
  • Scatter plots
  • Tile plots
  • Q-Q plots
• Temporal resolution
  • Episodes, seasonal, annual
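These displays are built from paired model predictions and monitor observations. The sketch below is a minimal, illustrative example of a scatter plot and a Q-Q plot; the arrays `obs` and `mod` are hypothetical paired daily-average PM2.5 values, not data from the guidance or the workshop.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical paired daily-average PM2.5 values (ug/m3) at one monitor.
obs = np.array([12.3, 18.7, 9.4, 25.1, 15.0, 30.2, 11.8, 20.5])
mod = np.array([10.9, 21.3, 8.1, 22.4, 17.6, 26.8, 13.2, 19.0])

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
lim = [0, max(obs.max(), mod.max()) * 1.1]

# Scatter plot: each point is one paired observation/prediction day.
ax1.scatter(obs, mod)
ax1.plot(lim, lim, "k--", label="1:1 line")
ax1.set(xlabel="Observed PM2.5 (ug/m3)", ylabel="Modeled PM2.5 (ug/m3)",
        title="Scatter plot")
ax1.legend()

# Q-Q plot: compare the two distributions by pairing sorted values,
# which shows distributional agreement without requiring day-to-day pairing.
ax2.scatter(np.sort(obs), np.sort(mod))
ax2.plot(lim, lim, "k--")
ax2.set(xlabel="Observed quantiles (ug/m3)", ylabel="Modeled quantiles (ug/m3)",
        title="Q-Q plot")

plt.tight_layout()
plt.show()
```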
Operational Evaluation: Species
• PM Species
  • PM2.5 mass
  • Sulfate
  • Nitrate
  • Mass associated with sulfate
  • Mass associated with nitrate
  • Elemental carbon
  • Organic carbon (organic mass)
  • Inorganic primary PM2.5 (IP)
  • Mass of individual constituents of IP
Operational Evaluation: Species
• Gaseous Species
  • Ozone
  • SO2
  • CO
  • NO2
  • NOy
  • PAN
  • Nitric acid
  • Ammonia
  • Hydrogen peroxide
Evaluation: Statistical Metrics
• Key question: How well does the model predict concentrations, spatially averaged near a monitor and averaged over the modeled days, compared with the corresponding monitored observations?
• Basic metric: normalized gross error
  • Averaged over monitor days
• Good model performance is of greatest concern at monitors that are exceeding the standards
Statistics in the Current Guidance
• Normalized gross error
• Normalized bias
• Fractional error (means and standard deviation)
• Fractional bias (means and standard deviation)
• Aggregated statistics
  • Averaged over multiple sites
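For reference, these metrics are commonly defined as follows for N paired predictions P_i and observations O_i; the exact formulations and averaging conventions in the guidance may differ, so treat these as conventional statements rather than the guidance's own definitions.

```latex
\begin{align*}
\text{Normalized gross error} &= \frac{1}{N}\sum_{i=1}^{N}\frac{\lvert P_i - O_i\rvert}{O_i}\times 100\% \\
\text{Normalized bias}        &= \frac{1}{N}\sum_{i=1}^{N}\frac{P_i - O_i}{O_i}\times 100\% \\
\text{Fractional error}       &= \frac{2}{N}\sum_{i=1}^{N}\frac{\lvert P_i - O_i\rvert}{P_i + O_i}\times 100\% \\
\text{Fractional bias}        &= \frac{2}{N}\sum_{i=1}^{N}\frac{P_i - O_i}{P_i + O_i}\times 100\%
\end{align*}
```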
Calculation of Statistics: Issues
• Many ways to calculate statistics
  • Averaging across days
  • Averaging across sites
• Similar, but different metrics
  • Normalized mean error vs. mean normalized error
• Low concentrations
  • Certain metrics are not appropriate when concentrations are very low
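The distinction between similar-sounding metrics matters in practice. The minimal sketch below (with hypothetical concentrations) shows how mean normalized error and normalized mean error diverge when some observed values are small, which is one reason certain metrics are not appropriate at very low concentrations.

```python
import numpy as np

# Hypothetical paired observations and predictions (ug/m3); one very low observation.
obs = np.array([2.0, 15.0, 20.0, 30.0])
mod = np.array([6.0, 14.0, 22.0, 27.0])

# Mean normalized error: normalize each pair by its observation, then average.
# Pairs with small observed values can dominate this metric.
mne = np.mean(np.abs(mod - obs) / obs) * 100.0

# Normalized mean error: sum the errors first, then normalize by the summed observations.
nme = np.sum(np.abs(mod - obs)) / np.sum(obs) * 100.0

print(f"Mean normalized error: {mne:.1f}%")   # ~57% here, driven by the 2 ug/m3 point
print(f"Normalized mean error: {nme:.1f}%")   # ~15% here
```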
Performance Goals
• “It is difficult to establish generally applicable numerical performance goals”
• Model performance is not particularly important for components with small observed concentrations relative to other components
  • In a relative attainment test, a small observed component cannot have a large influence
• “How good should a State expect performance of a model to be? Frankly, there is little basis for making recommendations at present (2001).”
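One way to see why small components matter less: in a relative test, each observed component is scaled by a modeled relative response factor (RRF), so even a poorly modeled minor species can move the projected total only slightly. The sketch below is illustrative only, with hypothetical concentrations and RRFs, and is not the guidance's exact attainment test procedure.

```python
# Hypothetical observed PM2.5 components (ug/m3) at a monitor.
observed = {"sulfate": 7.0, "nitrate": 1.0, "organic carbon": 4.0,
            "elemental carbon": 0.8, "other primary": 2.2}

# Hypothetical relative response factors (modeled future / modeled baseline)
# for each component, taken from a pair of model runs.
rrf = {"sulfate": 0.80, "nitrate": 0.60, "organic carbon": 0.95,
       "elemental carbon": 0.90, "other primary": 1.00}

# Scale each observed component by its RRF, then sum to a projected PM2.5 value.
projected = {species: conc * rrf[species] for species, conc in observed.items()}
print("Projected PM2.5:", round(sum(projected.values()), 2), "ug/m3")

# A minor component (nitrate here, 1 ug/m3 observed) shifts the total by at most
# a fraction of a microgram even if its RRF is badly wrong, which is why the
# performance goals focus on the major components.
```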
Performance Goals
• Expect performance for PM components to be worse than for ozone
  • Ozone goals not appropriate
• Numbers listed in guidance as examples of aggregated normalized gross error
  • Statistics averaged from several limited PM applications at the time (before 2001)
  • PM2.5: ~30-50%
  • Sulfate: ~30-50%
  • Nitrate: ~20-70%
  • EC: ~15-60%
  • OC: ~40-50%
Performance Goals
• Relative proportions
  • Major components (> 30% of PM2.5)
    • Agree within ±10% of relative proportion
    • If sulfate is 50% of mass, then the goal would be to predict sulfate that is 40-60% of total mass
  • Minor components
    • Agree within ±5% of relative proportion
• Difficult to assess proportions if one component is way off (too high or too low)
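A simple check against these relative-proportion goals might look like the sketch below. The component values are hypothetical, and the thresholds follow the ±10 / ±5 percentage-point reading of the goal illustrated by the sulfate example above.

```python
# Hypothetical observed and modeled PM2.5 component concentrations (ug/m3).
observed = {"sulfate": 6.0, "nitrate": 1.5, "organic carbon": 3.5, "elemental carbon": 1.0}
modeled  = {"sulfate": 4.5, "nitrate": 2.5, "organic carbon": 3.0, "elemental carbon": 1.0}

obs_total = sum(observed.values())
mod_total = sum(modeled.values())

for species in observed:
    obs_frac = observed[species] / obs_total * 100.0   # observed share of PM2.5 mass (%)
    mod_frac = modeled[species] / mod_total * 100.0    # modeled share of PM2.5 mass (%)
    # Major components (>30% of observed PM2.5) get the +/-10 point goal, minors +/-5.
    goal = 10.0 if obs_frac > 30.0 else 5.0
    status = "meets goal" if abs(mod_frac - obs_frac) <= goal else "misses goal"
    print(f"{species:16s} observed {obs_frac:5.1f}%  modeled {mod_frac:5.1f}%  ({status})")
```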
Other Analyses
• Analyses to address model response to emissions changes
  • Weekend/weekday emissions
    • Not sure if this is appropriate for PM
• Ratios of indicator species
  • Many ratios developed for ozone chemistry
  • Several ratios exist for PM
    • (NH4 + NH3) / (HNO3 + NO3 + SO4)
  • Most PM ratio techniques require difficult-to-find trace gas measurements (e.g., NH3 and HNO3)
• Retrospective analyses
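As a rough sketch of how such a ratio might be computed, the snippet below reads the slide's ratio as (NH4 + NH3) / (HNO3 + NO3 + SO4) on a molar basis. The concentrations are hypothetical, the molecular weights are nominal, and this is not necessarily the exact formulation intended in the guidance.

```python
# Hypothetical ambient concentrations (ug/m3); NH3 and HNO3 are the trace gas
# measurements that are often hard to obtain.
conc = {"NH4": 2.0, "NH3": 0.5, "HNO3": 1.0, "NO3": 1.5, "SO4": 5.0}

# Nominal molecular weights (g/mol) for converting mass to molar concentrations.
mw = {"NH4": 18.0, "NH3": 17.0, "HNO3": 63.0, "NO3": 62.0, "SO4": 96.0}

# Molar concentrations (proportional to umol/m3); the common factor cancels in the ratio.
molar = {s: conc[s] / mw[s] for s in conc}

# Indicator ratio as read from the slide: (NH4 + NH3) / (HNO3 + NO3 + SO4).
ratio = (molar["NH4"] + molar["NH3"]) / (molar["HNO3"] + molar["NO3"] + molar["SO4"])
print(f"Indicator ratio: {ratio:.2f}")
```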
Diagnostic Tests
• Sensitivity analyses
  • Is the model especially sensitive to an input or combination of inputs?
    • Initial and boundary conditions
    • Emissions inputs
    • Grid size and number of layers
    • Alternative met fields
  • Prioritize future data gathering
  • Assess robustness of a strategy
    • Prioritizing control efforts
• Process analysis
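In practice, a sensitivity test reduces to comparing paired model runs. The sketch below is purely illustrative: it uses randomly generated stand-ins for gridded PM2.5 fields from a base run and a perturbed run (e.g., with altered boundary conditions) and summarizes how far the predictions move.

```python
import numpy as np

# Hypothetical gridded 24-hr average PM2.5 fields (ug/m3) from two runs:
# a base case and a sensitivity run with a perturbed input.
rng = np.random.default_rng(0)
base = rng.uniform(5.0, 30.0, size=(50, 60))                 # rows x cols grid cells
sensitivity = base * rng.normal(1.05, 0.03, size=base.shape)  # stand-in perturbed run

# Summarize how strongly the input change moves the predictions.
diff = sensitivity - base
rel = diff / base * 100.0
worst = np.unravel_index(np.abs(diff).argmax(), diff.shape)
print(f"Mean change:    {diff.mean():+.2f} ug/m3 ({rel.mean():+.1f}%)")
print(f"Largest change: {np.abs(diff).max():.2f} ug/m3 at grid cell {worst}")
```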
Next Steps
• Update modeling guidance
  • Metric definitions and calculations
  • Statistical benchmarks
  • Diagnostic analyses
  • Other analyses to test model’s relative response to emissions changes
• Use workshop materials and discussion to help inform decisions
• Looking for recommendations and opinions