
Evaluation Results: What’s Real, What Matters?

Presentation Transcript


  1. Evaluation Results: What’s Real, What Matters?
     Presented at: National Low Income Energy Conference, Washington, D.C., June 15, 2006
     Presented by: Michael Blasnik, M. Blasnik & Associates · Boston, MA, michael.blasnik@verizon.net

  2. Why Evaluate?
     • Assess overall energy savings and impacts
       • compare to prior years and other programs
     • Assess performance of specific measures
     • Find out what works (or when it works) and what doesn’t
     • Identify opportunities for improvement
       • based on energy savings analysis
       • based on technical review & field inspections
       • based on interviews and surveys of stakeholders
     • Demonstrate the value of the program
       • to funders and others

  3. Types of Evaluation
     • Impact Evaluation
       • Assesses the impact of the program in meeting its objectives
       • Energy savings (and electric baseload and water?)
         • explore patterns in savings: by measure, house type, provider, etc.
       • Other program impacts (“non-energy benefits”): bill payment, health and safety, environmental, economic/job creation, etc.
     • Process Evaluation
       • Assesses aspects of the program other than direct impacts
       • technical review of procedures, field tools, training, QC, etc.
       • field visits using in-depth diagnostics
       • administrative/logistical systems of the program
       • survey clients to get feedback on program implementation, marketing, education, etc.

  4. Measuring Energy Savings
     • Energy usage changes due to the program, but also due to weather, behavior, and other changes in the home
     • Evaluation goal: measure the change due to the program
     • Adjust usage for weather variation from an average year
       • Degree-day methods (e.g., PRISM) work well for heating, but big changes still cause problems; they work less well for cooling loads
     • Use statistics to adjust for other non-program factors
       • Large groups average out the random non-program changes
       • The comparison group should reflect trends that don’t average out (including potential bias in the weather adjustment)
       • Net program savings = participant savings minus comparison-group savings (see the sketch below)
       • The comparison group must be very similar to participants to work; later participants are often used
       • Comparison-group usage typically changes less than 3%
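As a rough illustration of the weather adjustment and the net-savings arithmetic above, the sketch below fits a simple degree-day model to billing data and projects it to a typical weather year. This is a minimal sketch of the PRISM-style idea, not the actual PRISM procedure; the function names, the monthly-billing assumption, and the typical-year degree-day input are illustrative, not from the slides.

```python
# Minimal sketch of degree-day weather normalization (PRISM-style idea),
# assuming monthly billing data. Names and inputs are illustrative.
import numpy as np

def normalized_annual_use(therms_per_period, hdd_per_period, typical_year_hdd):
    """Fit usage = baseload + slope * HDD, then project to a typical weather year."""
    therms = np.asarray(therms_per_period, dtype=float)
    hdd = np.asarray(hdd_per_period, dtype=float)
    X = np.column_stack([np.ones_like(hdd), hdd])      # intercept + HDD slope
    (baseload, slope), *_ = np.linalg.lstsq(X, therms, rcond=None)
    return baseload * 12 + slope * typical_year_hdd    # 12 monthly periods assumed

def net_savings(participant_change, comparison_change):
    """Net program savings = participant savings minus comparison-group savings."""
    return participant_change - comparison_change
```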

  5. Energy Savings Evaluation Methods: Billing Data Analysis
     • Typically a full year of pre/post usage with a comparison group
       • evaluate each home; use break-outs to explore patterns (see the sketch below)
     • Large samples allow detailed assessments
     • Need cooperation and data from utility companies
     • Costs vary widely: $20k-$150k+
       • Depends on data collection (number and cooperation of utilities) and depth of research questions (e.g., non-energy benefits)
       • Can save money by using in-house staff for data collection
       • Some programs are looking at automating the analysis in-house
     • Fancy pooled econometric models are also used by some
       • “Black box” nature and statistical jargon mystify many, limit discussion and review, and make results impenetrable
       • often learn little about patterns in savings, but may be a useful cross-check
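The pre/post comparison-group logic of a billing analysis reduces to a few lines once each home has weather-normalized annual consumption before and after treatment. The sketch below assumes a table with hypothetical column names (group, provider, nac_pre, nac_post); a real study would add attrition screening, standard errors, and more break-out dimensions.

```python
# Hedged sketch of a pre/post billing analysis with a comparison group.
# Column names (group, provider, nac_pre, nac_post) are assumptions for illustration.
import pandas as pd

def billing_analysis(homes: pd.DataFrame) -> pd.DataFrame:
    homes = homes.copy()
    homes["gross_savings"] = homes["nac_pre"] - homes["nac_post"]
    # Average change in the comparison group captures non-program trends.
    comparison_change = homes.loc[homes["group"] == "comparison", "gross_savings"].mean()
    participants = homes[homes["group"] == "participant"].copy()
    participants["net_savings"] = participants["gross_savings"] - comparison_change
    # Break-outs: average net savings by provider (or by measure, house type, etc.).
    return participants.groupby("provider")["net_savings"].agg(["mean", "count"])
```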

  6. Energy Savings Evaluation Methods: Run-Time Metering (ASAP)
     • Meter daily usage for a few winter weeks pre- and post-treatment
     • Weather-adjust usage using software such as MN’s DESLog
     • Provides quick feedback, avoids problems with utility data collection; household changes are less likely over the shorter period
     • Drawbacks
       • weather/timing problems are common, especially in milder climates
       • labor intensive - usually small samples and no comparison group
       • no assessment of non-heating savings
     • Excellent way to compare two treatment approaches
       • allows a true experiment with random treatment assignment (see the sketch below)
     • Poor way to evaluate the overall program
       • metered jobs never seem to represent average work on average houses - everyone knows who is being evaluated
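Because run-time metering allows random assignment, two treatment approaches can be compared directly on their weather-adjusted per-home savings. The sketch below is one plausible way to make that comparison (a Welch t-test on the two groups); it is not the ASAP protocol itself, and the inputs are assumed to be per-home savings produced by a tool such as DESLog.

```python
# Illustrative comparison of two randomly assigned treatment approaches, assuming
# each home already has a weather-adjusted change in daily heating use (therms/day).
import numpy as np
from scipy import stats

def compare_treatments(savings_a, savings_b):
    a = np.asarray(savings_a, dtype=float)
    b = np.asarray(savings_b, dtype=float)
    t_stat, p_value = stats.ttest_ind(a, b, equal_var=False)  # Welch's t-test
    return {"mean_a": a.mean(), "mean_b": b.mean(),
            "difference": a.mean() - b.mean(), "p_value": p_value}
```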

  7. Energy Savings Evaluation Methods: Projected Savings
     • “Calculate” savings from energy audit software or engineering algorithms
     • Not a reliable evaluation method
       • measured savings are typically just 50%-70% of projected savings (see the example below)
       • flaws in assumptions, inputs, and the engineering models themselves all tend to over-predict savings
     • If you want to learn about actual program performance, you need actual usage data
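The 50%-70% figure is a realization rate: measured savings divided by projected savings. A trivial check, with hypothetical numbers:

```python
# Realization rate = measured savings / projected savings. Numbers are hypothetical.
def realization_rate(measured_therms: float, projected_therms: float) -> float:
    return measured_therms / projected_therms

print(realization_rate(180, 300))  # 0.60 -> within the 50%-70% range cited above
```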

  8. Learning from Energy Savings Results
     • Average savings alone don’t tell you much
     • Patterns of savings often matter more when trying to improve the program and identify what works best
     • Group savings break-outs
       • average savings by measure (or combination of measures), house type, or provider are interesting, but
       • simple group comparisons don’t show cause and effect because other factors may also differ between groups
     • Statistical modeling (regression)
       • tries to account for multiple factors at once to provide a better estimate of what drives savings (see the sketch below)
       • but be cautious of “black box” fancy statistics
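A regression of the kind described above might look like the sketch below, which estimates how pre-treatment usage and individual measures relate to net savings while holding the other factors constant. The column names and measure list are assumptions for illustration, not the model used in any particular study.

```python
# Hedged sketch of a savings regression; column names are illustrative assumptions.
import statsmodels.formula.api as smf

def fit_savings_model(homes):
    model = smf.ols(
        "net_savings ~ pre_use + wall_insulation + attic_insulation + air_sealing",
        data=homes,
    ).fit()
    # Coefficients estimate the therms/year associated with each factor, all else equal.
    # Inspect model.summary() rather than treating the output as a black box.
    return model
```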

  9. Impact Evaluation: Problems (1)
     • Bias issues
       • Impact evaluations are observational studies, not designed experiments, so bias should be expected
       • Reliability depends most strongly on having representative participant and comparison groups
     • Houses evaluated may not be typical of the program
       • less likely to include houses with renters, occupant turnover, shut-off or intermittent utility service, or supplemental heat
       • more likely to include homeowners, stable households, and seniors
       • attrition bias has been documented and needs to be assessed
     • Comparison group may not be comparable to participants
       • the HEAP population is different from WAP (especially owners vs. renters)
       • even the use of future participants isn’t a guarantee
       • Example: many studies have shown HEAP grants decline after WAP. The problem with this analysis is that WAP and HEAP are often applied for at the same time, so HEAP participation is exceptionally high in the WAP application year and reverts back the following year.

  10. Impact Evaluation: Problems (2)
     • Interpretation issues
       • Results for any one house are unreliable - stuff happens
       • Savings vary between groups of houses for many reasons
       • Differences in savings opportunities (housing or appliance stock), work selected, and work quality all play a role
       • Only field observations by people with expertise can assess quality
     • Evaluator issues
       • Good evaluations recognize and try to deal with potential bias and don’t over-state the certainty of results
       • Many evaluators have little building-science knowledge, so be wary of strange results that could be due to methodology errors or problems
       • Many technical mistakes go unnoticed or unchallenged, often hidden behind complex jargon

  11. WAP Gas Savings Results (average therms/year/home)

  12. WAP Gas Savings Results

  13. WAP Gas Savings: Mobile Homes (usage and savings in therms/year/home)
     • Mobile-home gas savings are much lower than for site-built homes: averaging under 100 therms in the 3 most recent studies
     • Smaller, newer, and better insulated than site-built homes, leading to lower pre-treatment usage
     • Treatments are more difficult, and health & safety needs are substantial
     • Untapped potential for ducts? Electric baseload?

  14. WAP Electric Heating Savings (usage and savings in kWh/year/home)
     • Savings 9%-13%, 2,000-3,000 kWh
     • Generally newer homes with fewer insulation and air-sealing opportunities than gas-heated homes. More duct-sealing opportunities?

  15. Electric Savings: Baseload (usage and savings in kWh/year/home)
     • Savings ~12%, but vary from 772 to 1,750 kWh depending on usage
     • Bill-savings payback is generally quicker than for most gas heating measures
     • Refrigerators and lighting produce most of the savings, along with some hot water (and fuel switching in WI)

  16. Lessons Learned: Savings and Pre-Treatment Usage - You can’t save what you don’t use (graph source: Ohio HWAP ’94)

  17. Lessons Learned: What Works (1)
     • Wall insulation
       • big savings, averaging ~190 th/yr or 0.2 th/yr/ft²; usually the most cost-effective major measure (based on dense-pack states)
     • Heating system replacement
       • 100-180 th/yr, 12%-20% of heating use; higher savings for high-efficiency units (and their costs have dropped over time)
       • often not very cost-effective due to high cost ($2,500+ in many areas); done as a safety measure
     • Attic insulation
       • average savings of 65-150 th/yr have been found
       • savings vary widely with existing conditions and work quality
     • Refrigerator replacement (and freezer replacement and removals)
       • 700-900 kWh/yr average savings; very cost-effective in most places
     • Duct sealing in attics, garages & crawlspaces
       • 7%-13% savings for reasonable cost; quite cost-effective

  18. Lessons Learned: What Works (2)
     • Blower-door guided air sealing
       • savings average 50-100 th/yr, or ~7 th/yr per 100 CFM50 of reduction; 70%-100% of projected (see the example below)
     • Hot water measures (showerheads, aerators, wraps, etc.)
       • 45 th/yr for combined measures (Iowa) = 60% of projected; many devices are already low-flow and people overstate showering time
     • Lighting
       • 25-50 kWh/yr per bulb; worthwhile, but only about 50% of projected savings due to over-stated hours of use, premature failure, and removal
     • Set-back thermostats?
       • utility program evaluations have found 4%-7% savings (70-100 th/yr); excellent payback, but do the savings last?
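As a worked example of the air-sealing rule of thumb above (~7 therms/yr per 100 CFM50 of reduction), with hypothetical leakage numbers:

```python
# Worked example of the ~7 therms/yr per 100 CFM50 rule of thumb from the slide;
# the blower-door leakage numbers below are hypothetical.
def air_sealing_savings(cfm50_pre: float, cfm50_post: float,
                        therms_per_100_cfm50: float = 7.0) -> float:
    return (cfm50_pre - cfm50_post) / 100.0 * therms_per_100_cfm50

print(air_sealing_savings(3200, 2400))  # 56.0 therms/yr, near the 50-100 range cited
```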

  19. Lessons Learned: What Doesn’t Work?
     • Window and door replacement
       • savings very small; a negative correlation with savings in one study
     • Duct sealing in basements
       • multiple studies show 0%-3% savings; still fix big holes & IAQ problems
     • Floor insulation
       • savings <20 th/yr in Ohio & Iowa, but warmer floors; a duct issue?
     • Heating system tune-ups
       • OK for safety reasons, but no savings?
     • Mobile home weatherization
       • cost-effective savings? Ducts? Or focus on electric baseload?
     • Energy education?? – the jury is still out
       • works well in pilots, hard to replicate at large scale
       • at least as hard to do right as the technical work
       • low cost makes it most likely worthwhile in any program

  20. Lessons Learned: Big Picture
     • Measured savings are typically about 50%-70% of the savings projected by energy audit software (e.g., NEAT)
       • Don’t blame the occupants: the shortfall in savings is not due to behavior changes
       • Poor modeling assumptions and complex interactions reduce savings from many measures, including insulation, air sealing, and even lighting and hot water
       • Data collected by auditors and reported by clients often overstate savings
     • Higher savings are associated with programs and agencies that:
       • treat high-use homes and target resources to them
       • use a diagnostic approach (blower doors, etc.)
       • install wall insulation frequently
       • have additional funding from utilities and/or government, often to replace heaters
     • Lower savings are associated with programs and agencies that:
       • view WAP as a housing program, replacing windows & doors and doing repairs
       • are in a mild climate and/or have low pre-treatment usage levels
       • have a newer housing stock and/or many mobile homes
