
ShakeAlert Testing Procedure Discussion Philip Maechling 26 March 2010


Presentation Transcript


  1. ShakeAlert Testing Procedure Discussion Philip Maechling 26 March 2010

  2. ShakeAlert Testing SCEC has the opportunity to define a testing approach for the CISN ShakeAlert System. • Testing approach should be consistent with USGS interests in the ShakeAlert System. • CTC effort should provide a longitudinal study of ShakeAlert capabilities. • A science-oriented testing focus (rather than an engineering focus) is more consistent with the CSEP model. • The CTC effort provides SCEC with an opportunity to demonstrate the general capabilities of the CSEP infrastructure on other problems.

  3. Scale of SCEC CTC Activity The CTC plan must be implemented within the funded level of effort of approximately 12 hours per month. • SCEC should establish the scientific framework for ShakeAlert Testing • Initial testing approach should be simple • Initial testing should provide value to USGS and ShakeAlert developers • Initial testing should communicate the value of EEW testing to the SCEC community and CISN

  4. Bridging the gap between science and engineering: avenues for collaborative research Christine Goulet, PhD Sr Geotechnical Engineer, URS Lecturer, UCLA christine_goulet@urscorp.com 2009 Annual Meeting: Palm Springs, CA

  5. Conclusion • Collaboration is an outcome-driven process (mission, vision, etc.) • We can benefit from collaboration if we commit to: spending time and effort in the process, keeping an open mind, and keeping an eye on the goal • Benefit for engineers: a better understanding and integration of seismological phenomena = better design • Benefit for scientists: the application and dissemination of their results into the built world = greater impact

  6. On collaboration Collaboration is a process through which people work together, pooling their resources to achieve a shared desired result or outcome. The collaboration process: involves a catalyst (common interest, reaction to an event); provides a broader insight into a problem and its potential solutions; allows a knowledge transfer by which each participant’s specialty benefits the group (knowledge optimization); gives access to new problems and ideas. Successful collaboration requires: effective communication and a clearly defined goal or vision. Collaboration is an outcome-driven process.

  7. On communication To communicate is human… …it does not mean we’re naturally good at it. Key elements for better communication: • Sharing a common language • Saying what you mean • Developing improved active listening skills • Using feedback techniques (“What I understood is… Is this correct?”) • Keeping an open mind

  8. [Diagram: Engineers (who design a product) and Scientists (who seek understanding) share a group interest in earthquakes; a shared vision connects their goals and desired outcomes.]

  9. Interfaces between disciplines: Geologists & Seismologists; Seismologists & Engineers; Geotechnical Engineers & Seismologists; Geotechnical & Structural Engineers; Engineers & loss modelers. Topics at these interfaces: • Source effects: fault mechanism, magnitude and location; recurrence models • Travel paths • Site effects: wave propagation to the surface, basin effects, topographic effects, directivity • Structural response, including foundation • Loss analysis

  10. Establish Testing Emphasis with USGS and CISN Development Groups

  11. Problems in Assessing Forecasts ShakeAlert Forecast Evaluation Problems: • Scientific publications provide insufficient information for independent evaluation • Data to evaluate forecast experiments are often improperly specified • Active researchers are constantly tweaking their codes and procedures, which become moving targets • Difficult to find resources to conduct and evaluate long-term forecasts • Standards are lacking for testing forecasts against reference observations

  12. Long- and short-term operational earthquake forecasting in Italy: the case of the April 6, 2009, L'Aquila earthquake Warner Marzocchi INGV, Istituto Nazionale di Geofisica e Vulcanologia, Rome, Italy In collaboration with: Anna Maria Lombardi (INGV), Gordon Woo (RMS), Thomas van Stiphout (ETH), Stefan Wiemer (ETH)

  13. Design of Testing Experiment

  14. Additional Goal for Testing The EEW tests we implement should be valid for CISN and any other EEW implementation, including commercial systems and community contribution-based systems.

  15. Design of an Experiment Many CSEP testing principles are applicable to CISN EEW Testing. The following definitions need to be made to evaluate forecasts: • Exact definition of testing area • Exact definition of a forecast • Exact definition of input data used in forecasts • Exact definition of reference observation data • Measures of success for forecasts

  16. Design of an Experiment Design of EEW Science Testing introduces elements that CSEP has not had to consider • Must decide whether to test both forecasts and “alerts” • Different algorithms produce different forecasts • Some (e.g., On-site) produce site-specific information (PGA) and event magnitude, but no origin time or distance to the event • Some (e.g., VS) produce full event parametric information • Some (e.g., ElarmS) produce site-specific ground motion estimates on a regular grid • Some produce single values (On-site) • Some produce time-series with updates (VS, ElarmS); a sketch of a record type that could normalize these outputs follows
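To make the heterogeneity above concrete, here is a minimal sketch of a normalized forecast record that a testing center could store for all three algorithms. The class and field names (EEWForecast, site_pga, grid_pgv, etc.) are illustrative assumptions, not a CISN schema:

```python
from dataclasses import dataclass, field
from typing import Optional, Tuple, Dict

# Hypothetical normalized record for heterogeneous EEW forecasts.
# Optional fields stay None when an algorithm does not produce
# that quantity.
@dataclass
class EEWForecast:
    algorithm: str                          # "onsite", "vs", or "elarms"
    issue_time: float                       # epoch seconds the forecast was issued
    magnitude: Optional[float] = None       # event magnitude estimate, if any
    origin_time: Optional[float] = None     # None for On-site (no origin time)
    epicenter: Optional[Tuple[float, float]] = None  # (lat, lon), if estimated
    site_pga: Dict[str, float] = field(default_factory=dict)   # station -> PGA (On-site)
    grid_pgv: Dict[Tuple[float, float], float] = field(default_factory=dict)  # grid cell -> PGV (ElarmS)
    sequence: int = 0                       # update counter for time-series forecasts (VS, ElarmS)
```

A single-value algorithm would emit one such record per event; VS and ElarmS would emit a sequence of records with increasing sequence numbers.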

  17. Design of an Experiment Design of EEW Science Testing introduces elements that CSEP has not had to consider • More difficult to determine the information used in a forecast, especially once the Bayesian approach is fully implemented • More difficult to determine what data are used in a forecast at any given time • Time basis of the forecast (forecast term, e.g., 60 seconds to 1 second) varies by event • Greater interest in summaries of performance on an event-by-event basis. Should support push-based distribution of results after significant events.

  18. Design of an Experiment Example of stations that could contribute to forecasts.

  19. The 1-day forecasts (the palette represents the rate of M 4+) Daily forecasts released at 8:00 AM (no overlaps)

  20. Testing the forecasts (using M 2.5+ events) N-test Spatial test
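For reference, the N-test named above can be summarized in a few lines. This is a sketch of one common formulation (a Poisson consistency test of the observed event count against the forecast rate, using scipy); it is offered as an illustration, not as the exact implementation used for the Italian forecasts:

```python
from scipy.stats import poisson

def n_test(n_forecast: float, n_observed: int):
    """Number test: is the observed count of target events (e.g. M 2.5+)
    consistent with the total rate predicted by the forecast?
    Returns (delta1, delta2): the Poisson probabilities of observing
    at least / at most n_observed events. Small values signal
    underprediction / overprediction respectively."""
    delta1 = 1.0 - poisson.cdf(n_observed - 1, n_forecast)  # P(N >= n_observed)
    delta2 = poisson.cdf(n_observed, n_forecast)            # P(N <= n_observed)
    return delta1, delta2

# Example with made-up numbers: a daily forecast totaling 12.3 events, 8 observed.
d1, d2 = n_test(12.3, 8)
```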

  21. 2. GMPE prediction, distance-scaling term. [Figure: CB (2008) predictions of PGA, SA (T = 1 s), and SA (T = 10 s) versus rupture distance Rrup from 1 to 100 km, for a strike-slip earthquake with VS30 = 540 m/s. Image: J. Stewart, L. Star]

  22. Design of an Experiment Propose time-dependent tests as forecasts before origin time (or before peak ground motion at a site) • Could produce a peak ground motion map at origin time and later. Forecasts produce ground motion maps, and any region that has not yet received its peak ground motion contributes to the forecast. Each algorithm produces a series of forecast maps as the event unfolds; any region in any map that has not yet experienced its time of PGV is credited. The number of evaluable map regions falls over time, eventually reaching zero forecasts to be evaluated for the event (see the sketch after this slide). • For the next test, maybe we can ignore whether sites receive a warning. • Plot the forecast by time, like slide 15, showing improvement in the forecast with shorter forecast times.
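A minimal sketch of the crediting rule just described, under the assumption that observed PGV arrival times are known per grid cell after the event (all names here are hypothetical):

```python
def evaluable_cells(forecast_time: float, pgv_arrival: dict) -> list:
    """Cells still 'in play' for a map issued at forecast_time: those whose
    observed time of peak ground velocity had not yet occurred. This set
    shrinks as the event unfolds, eventually reaching zero."""
    return [cell for cell, t_pgv in pgv_arrival.items() if t_pgv > forecast_time]

def score_map(forecast_map: dict, observed_pgv: dict, pgv_arrival: dict,
              forecast_time: float) -> dict:
    """Illustrative scoring: compare predicted vs. observed PGV only over
    cells that were still forecastable when the map was issued."""
    cells = evaluable_cells(forecast_time, pgv_arrival)
    return {c: forecast_map[c] - observed_pgv[c]
            for c in cells if c in forecast_map and c in observed_pgv}
```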

  23. First test is to reproduce the ShakeMap

  24. Design of an Experiment • Map of reporting stations used in ShakeMap

  25. Design of an Experiment Propose time-dependent tests as forecasts before origin time (or before peak ground motion at a site) • Introduce the use of the first provided estimate as an important measure. • Introduce announcers as a new class of systems that provide forecasts. Announcers would be easy to add and easy to remove. • On which side of the interface is the probability set? Do the systems provide forecasts with probabilities, or do we set tests at a probability level and let them determine whether a forecast meets the specified level?

  26. Points to bring home on short-term forecasts • We perform daily aftershock forecasts in real time. From the tests over the first months, the forecast seems well calibrated, correctly describing the space-time evolution of the aftershock sequence. • The same model (retrospectively) detected an increase in probability before the main event; however, the (daily) probability did not reach a value of 1%.

  27. Introducing the problem (public officials vs. scientists) The challenge is for scientists to articulate uncertainty without losing credibility, and to give public officials the information they need for decision-making. This requires bridging the gap between scientific output (probability) and the Boolean logic (yes/no) of decision-makers.

  28. Design of an Experiment Design of EEW Science Testing introduces elements that CSEP has not had to consider • CISN seems to be distinguishing between an event module (which produces event parameters) and a user module (which produces site-specific ground motion estimates) • User modules are likely to vary in their tolerance for false alarms and in their conversion from location/magnitude to site-specific ground motion estimates • I recommend we make it easy to add new forecast sources and remove old ones, so that we can support experimentation on forecasters by CISN.

  29. New Waveform Processing Library Development of a new Waveform Processing Library (based on the same idea already used by the On-site algorithm): the old framework used GCDA (Generic Continuous Data Area) to store waveforms, which slowed down read/write access to the waveforms and the overall processing thread. To avoid that problem, the new version will use internal memory buffers and work in a single-process, multi-threaded environment (a sketch of this pattern follows).
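A minimal sketch of that single-process, multi-threaded buffering pattern: an acquisition thread hands waveform packets to a worker through an in-memory queue, with no shared-memory (GCDA-style) round trip. This illustrates the general design only; names like packet_buffer and handle_packet are hypothetical, not the actual library API:

```python
import threading
import queue

packet_buffer = queue.Queue(maxsize=10000)  # bounded in-memory buffer

def handle_packet(p):
    pass  # placeholder for picking / amplitude processing

def acquire(packets):
    """Producer: e.g. packets read from a telemetry socket."""
    for p in packets:
        packet_buffer.put(p)          # blocks if the buffer is full
    packet_buffer.put(None)           # sentinel: no more data

def process():
    """Consumer: runs in a worker thread inside the same process."""
    while True:
        p = packet_buffer.get()       # in-memory handoff, no disk or IPC
        if p is None:
            break
        handle_packet(p)

worker = threading.Thread(target=process, daemon=True)
worker.start()
```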

  30. Decision Module (DM) • The Decision Module is expected to: receive short, independent messages from the three Event Detectors; run on different machines than the Event Detectors. • The passing of messages from the three Event Detectors to the DM, as well as the broadcast of the DM's outputs to users, will likely be based on Apache ActiveMQ (a publish-subscribe messaging system with asynchronous message passing and persistent message storage); a sketch of such messaging follows this slide. • A preliminary API is almost finished. • Challenging: association and updates of messages. Update the DM event if possible; if the misfit is too large, disassociate all messages of the event and create a new DM event (similar to Binder). Requires that the On-site algorithm provide eventIDs (done).
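As an illustration of the ActiveMQ-based message passing described above, here is a sketch using the stomp.py client (ActiveMQ supports the STOMP protocol alongside its native one). The broker address, topic name, and message fields are assumptions for illustration, not the ShakeAlert message format:

```python
import json
import stomp  # stomp.py; ActiveMQ accepts STOMP connections

# Hypothetical detector-side publisher.
conn = stomp.Connection([("localhost", 61613)])
conn.connect(wait=True)

message = {
    "eventID": "onsite-2010-0001",     # On-site now provides eventIDs
    "algorithm": "onsite",
    "magnitude": 4.8,
    "issueTime": "2010-03-26T12:00:00Z",
}
conn.send(destination="/topic/eew.detections", body=json.dumps(message))
conn.disconnect()
```

On the DM side, a subscriber would register a listener on the same topic and apply the association/update logic; asynchronous delivery and persistent storage come from the broker configuration rather than the client code.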

  31. [Diagram: three detection algorithms feed a Bayesian Decision Module, which issues the CISN ShakeAlert: the τc-Pd On-site Algorithm (single sensor), the Virtual Seismologist (VS) (sensor network), and ElarmS (sensor network).] • Task 1: increase reliability. The Decision Module (Bayesian) reports the most probable Mw, location, origin time, and ground motion, with uncertainties; the probability of a false trigger (i.e., no earthquake); and a CANCEL message if needed. The Bayesian estimate is updated with time.

  32. [Diagram: the same three algorithms (τc-Pd On-site, Virtual Seismologist (VS), ElarmS) feed the Bayesian Decision Module, which alerts CISN ShakeAlert and, through a USER Module, test users; the CISN EEW Testing Center provides feedback.] • Task 1: increase reliability. • Task 2: demonstrate and enhance. The USER Module provides single-site warnings and a map view to test users, reporting predicted and observed ground motions, available warning time, probability of false alarm, etc.

  33. Methodology development (slide courtesy of Holly Brown)

  34. World Meteorological Organization (WMO), Observing and Information Systems Department, WMO Information System (WIS): Identifiers and the Common Alerting Protocol (CAP). Presented 23 June 2009 at the Joint Meeting of MeteoAlarm and the WIS CAP Implementation Workshop on Identifiers by Eliot Christian <echristian@wmo.int>

  35. Outline • What is CAP? • Why and How would MeteoAlarm use CAP? • What are the issues with Identifiers?

  36. What is CAP? The Common Alerting Protocol (CAP) is a standard message format designed for all-media, all-hazards communications: • over any and all media (television, radio, telephone, fax, highway signs, e-mail, Web sites, RSS "Blogs", ...) • about any and all kinds of hazard (weather, fires, earthquakes, volcanoes, landslides, child abductions, disease outbreaks, air quality warnings, beach closings, transportation problems, power outages, ...) • to anyone: the public at large; designated groups (civic authority, responders, etc.); specific people

  37. Structure of a CAP Alert CAP Alert messages contain: • Text values for human readers, e.g., "headline", "description", "instruction", "area description", etc. • Coded values useful for filtering, routing, and automated translation to human languages
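To make the structure concrete, here is a small sketch that assembles a CAP 1.2 alert with both kinds of element, using only Python's standard library. The identifier, sender, and text values are invented for illustration:

```python
import xml.etree.ElementTree as ET

CAP_NS = "urn:oasis:names:tc:emergency:cap:1.2"
ET.register_namespace("", CAP_NS)

def el(parent, tag, text):
    e = ET.SubElement(parent, "{%s}%s" % (CAP_NS, tag))
    e.text = text
    return e

alert = ET.Element("{%s}alert" % CAP_NS)
# Coded values used for filtering, routing, and automated handling
el(alert, "identifier", "cisn-eew-example-001")      # invented identifier
el(alert, "sender", "eew@example.org")               # hypothetical sender
el(alert, "sent", "2010-03-26T12:00:00-08:00")
el(alert, "status", "Test")
el(alert, "msgType", "Alert")
el(alert, "scope", "Restricted")
info = ET.SubElement(alert, "{%s}info" % CAP_NS)
el(info, "category", "Geo")
el(info, "event", "Earthquake Early Warning")
el(info, "urgency", "Immediate")
el(info, "severity", "Moderate")
el(info, "certainty", "Likely")
# Text values for human readers
el(info, "headline", "EEW test alert")
el(info, "description", "Estimated M4.8 event; light to moderate shaking expected.")
area = ET.SubElement(info, "{%s}area" % CAP_NS)
el(area, "areaDesc", "Southern California")

print(ET.tostring(alert, encoding="unicode"))
```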

  38. Filtering and Routing Criteria • Date/Time • Geographic Area (polygon, circle, geographic codes) • Status (Actual, Exercise, System, Test) • Scope (Public, Restricted, Private) • Type (Alert, Update, Cancel, Ack, Error)

  39. Filtering and Routing Criteria • Event Categories (Geo, Met, Safety, Security, Rescue, Fire, Health, Env, Transport, Infra, Other) • Urgency: timeframe for responsive action (Immediate, Expected, Future, Past, Unknown) • Severity: level of threat to life or property (Extreme, Severe, Moderate, Minor, Unknown) • Certainty: probability of occurrence (Very Likely, Likely, Possible, Unlikely, Unknown)
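A few lines suffice to show how these coded values support filtering and routing. This is a generic illustration with an invented subscription format, not a CAP library:

```python
def matches(alert: dict, subscription: dict) -> bool:
    """Deliver an alert only if every coded field the subscriber
    constrains takes one of the allowed values."""
    return all(alert.get(field) in allowed
               for field, allowed in subscription.items())

# A subscriber wanting actual, public, geophysical alerts of high severity:
sub = {"status": {"Actual"}, "scope": {"Public"},
       "category": {"Geo"}, "severity": {"Extreme", "Severe"}}
matches({"status": "Actual", "scope": "Public",
         "category": "Geo", "severity": "Severe"}, sub)   # -> True
```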

  40. Typical CAP-based Alerting System

  41. http://www.weather.gov/alerts

  42. Existing proposals for EEW Testing Agreements

  43. Design of an Experiment We propose that initial CTC testing supports science groups first, engineering second. • Accuracy and timeliness of event-oriented parameters (location, magnitude) • Accuracy and timeliness of ground motion forecasts (PGV, PSA, intensity) for both site-specific and grid-based forecasts (example metrics are sketched below)
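As a sketch of what "accuracy and timeliness" could mean operationally, here are two simple metrics: epicentral location error against the catalog solution, and available warning time at a site. These are illustrative choices, not the agreed CTC measures:

```python
import math

def location_error_km(lat_est, lon_est, lat_cat, lon_cat):
    """Great-circle (haversine) distance in km between the estimated
    and catalog epicenters."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat_est), math.radians(lat_cat)
    dp = math.radians(lat_cat - lat_est)
    dl = math.radians(lon_cat - lon_est)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def warning_time_s(t_alert, t_peak_motion):
    """Timeliness at a site: seconds between the alert and the observed
    peak ground motion; negative means the alert arrived too late."""
    return t_peak_motion - t_alert
```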

  44. Design of an Experiment Many CSEP testing principles are applicable to CISN EEW Testing. The following definitions need to be made to evaluate forecasts: • Exact definition of testing area • Exact definition of a forecast • Exact definition of input data used in forecasts • Exact definition of reference observation data • Measures of success for forecasts

  45. Design of an Experiment Are the 3 CSEP regions valid for EEW? • Region Under Test • Catalog Event Region • Buffer to avoid catalog issues

  46. Design of an Experiment Many CSEP testing principles are applicable to CISN EEW Testing. The following definitions need to be made to evaluate forecasts: • Exact definition of testing area • Exact definition of a forecast • Exact definition of input data used in forecasts • Exact definition of reference observation data • Measures of success for forecasts

  47. Design of an Experiment Caltech τc-Pd RT/AL: For each triggered station ≤ Dist-max, send one alert of: • M-est with Talert and Talgorithm • PGV-est with Talert and Talgorithm For each M ≥ M-min, send one alert of: • Number of reporting and non-reporting stations ≤ Dist-max as a function of Talert and Talgorithm UC Berkeley ElarmS RT and ETH VS: For each triggered event, send one alert of: • M-est as a function of Talert • Loc-est as a function of Talert • PGA-est at each station ≤ Dist-max without S-wave arrival as a function of Talert • PGV-est at each station ≤ Dist-max without S-wave arrival as a function of Talert • Number of reporting and non-reporting stations ≤ Dist-max as a function of Talert

  48. Design of an Experiment Many CSEP testing principles are applicable to CISN EEW Testing. The following definitions need to be made to evaluate forecasts: • Exact definition of testing area • Exact definition of a forecast • Exact definition of input data used in forecasts • Exact definition of reference observation data • Measures of success for forecasts

  49. Design of an Experiment Inputs to forecasts are based on CISN real-time data • If system performance (e.g., missed events) is to be evaluated, the CTC will need the station list in use at any given time • The existing CISN often has problems keeping track of which stations are being used in forecasts (a versioned station list, sketched below, would address this)
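One simple way to meet the "station list in use at any time" requirement is to keep time-stamped snapshots of the station configuration and look them up by time. A minimal sketch, with invented times and station codes:

```python
import bisect

# Time-stamped snapshots of the stations in use; each entry takes
# effect at the given epoch time. Values below are invented.
snapshots = [
    (1267401600, {"PAS", "USC", "CIA"}),
    (1269820800, {"PAS", "USC", "CIA", "SVD"}),
]
times = [t for t, _ in snapshots]

def stations_in_use(t: float) -> set:
    """Return the station list that was active at epoch time t."""
    i = bisect.bisect_right(times, t) - 1
    return snapshots[i][1] if i >= 0 else set()
```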

  50. Design of an Experiment Many CSEP testing principles are applicable to CISN EEW Testing. The following definitions need to be made to evaluate forecasts: • Exact definition of testing area • Exact definition of a forecast • Exact definition of input data used in forecasts • Exact definition of reference observation data • Measures of success for forecasts
