
Rare-Event Simulation Splitting for Variance Reduction


Presentation Transcript


  1. Rare-Event Simulation: Splitting for Variance Reduction. IE 680, Spring 2007. Bryan Pearce

  2. What is a Rare Event? [Diagram: state space Ω containing a set A and a rare-event set B]

  3. Formal Problem Definition
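A sketch of the standard formulation from the splitting literature; the Markov chain {Xt}, the sets A and B, and the hitting times below follow the usual conventions and are assumptions rather than the slide's exact wording.

```latex
% Markov chain {X_t} on state space \Omega; the chain starts near A and
% B is the rare set. The quantity to estimate is the probability that the
% chain reaches B before returning to A:
\gamma = \mathbb{P}(\tau_B < \tau_A),
\qquad
\tau_A = \inf\{t > 0 : X_t \in A\},
\qquad
\tau_B = \inf\{t > 0 : X_t \in B\}.
% \gamma is tiny, so crude Monte Carlo needs on the order of 1/\gamma
% replications just to observe the event once.
```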

  4. Splitting: the beginning • Importance function h • Measures “how close” a state is to the rare event • Divide the intermediary state space into m ‘levels’ according to the thresholds l0, l1, …, lm

  5. [Diagram: importance function h(x) with level thresholds l0, l1, l2, l3, …, lm = l]

  6. More formally:
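A sketch of the level decomposition induced by the thresholds, using the pk notation of the later slides; the event names Dk are introduced here only for illustration.

```latex
% With 0 = l_0 < l_1 < \dots < l_m = l, let D_k be the event that the chain
% reaches \{x : h(x) \ge l_k\} before hitting A, so that D_m is the rare event.
% Since D_m \subset D_{m-1} \subset \dots \subset D_0, the chain rule gives
\gamma = \mathbb{P}(D_m)
       = \prod_{k=1}^{m} \mathbb{P}(D_k \mid D_{k-1})
       = \prod_{k=1}^{m} p_k,
\qquad p_k := \mathbb{P}(D_k \mid D_{k-1}).
% Each p_k is moderate even when \gamma itself is tiny, which is what
% splitting exploits.
```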

  7. How to choose h? • Defining the importance function can be difficult. • Ideally h should reflect the most likely path to the rare event, so that • pk(x) = pk (the level-crossing probability is independent of the entrance state x) and • pk = p (independent of the level). • Such a choice presumes a priori knowledge of the system.

  8. First sub-interval: crude MC. Simulate N0 independent chains; R0 of them reach l1. [Diagram: h trajectories over time, crossing the threshold l1]

  9. Second sub-interval: splitting. Simulate N1 chains, split from the states that entered l1; R1 of them reach l2. …and so on for each sub-interval. [Diagram: trajectories restarted from the l1 entrance states, crossing l2]

  10. Notation
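In the notation of the two preceding slides (Nk chains launched in sub-interval k, Rk of them reaching lk+1), the product-form estimator used throughout the references can be sketched as:

```latex
% Each ratio R_k / N_k estimates the conditional probability
% p_{k+1} = P(D_{k+1} \mid D_k), so the overall estimator is
\hat{\gamma} = \prod_{k=0}^{m-1} \frac{R_k}{N_k}.
% Conditioning level by level shows this estimator is unbiased for the
% splitting policies described on the next slides.
```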

  11. Splitting policy – fixed splitting • Each chain that reaches level k is cloned ck times. • Nk will be random for each level k > 0 • Stratified sampling from the entrance distribution of level k
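A minimal Python sketch of one fixed-splitting stage. `simulate_to_next_level` is a hypothetical placeholder for the model-specific dynamics (it should return the entrance state at the next level, or None if the chain falls back to A first); the function and argument names are illustrative.

```python
def fixed_splitting_stage(entrance_states, c_k, simulate_to_next_level):
    """One stage of fixed splitting: every chain that entered level k is
    cloned c_k times, and each clone is simulated until it either reaches
    level k+1 (returning its entrance state there) or dies (returning None).
    The number of chains at the next level is random."""
    next_entrance_states = []
    for state in entrance_states:
        for _ in range(c_k):                        # clone c_k times
            result = simulate_to_next_level(state)
            if result is not None:                  # reached level k+1
                next_entrance_states.append(result)
    return next_entrance_states                     # length = R_k, a random N_{k+1}
```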

  12. Splitting policy – fixed effort • Fix Nk in advance and choose the Nk starting states from the entrance distribution of level k by either: • Random assignment – draw the Nk states at random from the entrance distribution • Fixed assignment – start an equal number of chains from each entrance state (better stratification)
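A minimal Python sketch of one fixed-effort stage showing both assignment rules from the slide; as above, `simulate_to_next_level` is a hypothetical stand-in for the model dynamics.

```python
import random

def fixed_effort_stage(entrance_states, N_k, simulate_to_next_level,
                       assignment="fixed"):
    """One stage of fixed-effort splitting: exactly N_k chains are started
    from the entrance distribution of level k, however many chains got there.
    assignment="random": draw the N_k starting states at random (with
    replacement) from the entrance states.
    assignment="fixed":  start a (near-)equal number of chains from each
    entrance state, which stratifies the entrance distribution."""
    if assignment == "random":
        starts = [random.choice(entrance_states) for _ in range(N_k)]
    else:
        reps, extra = divmod(N_k, len(entrance_states))
        starts = entrance_states * reps + entrance_states[:extra]

    results = [simulate_to_next_level(s) for s in starts]
    next_entrance_states = [s for s in results if s is not None]
    return len(next_entrance_states), next_entrance_states   # (R_k, level k+1 states)
```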

  13. Pros & cons of splitting method • Fixed splitting – asymptotically more efficient under optimal conditions, but efficiency is very sensitive to the splitting factors ck • Fixed effort – higher memory requirement, but more robust

  14. Efficiency Our hope is that splitting will allow our variance to shrink faster than our computational time grows. This has indeed been shown to be true in many cases.

  15. Truncation - Motivation. [Diagram: h trajectories against time with levels l1–l4, highlighting the simulation time spent reaching l1]

  16. Simple (biased) Truncation. Choose β: • If a chain that has reached level k falls below level lk−β, terminate it. • The estimator becomes biased, more so with small β. • A large β does not reduce the workload very much. • The RESTART method uses this kind of truncation.
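A one-line Python sketch of the rule; `levels` (the list [l0, l1, …, lm]) and the argument names are illustrative.

```python
def should_truncate(h_value, levels, k, beta):
    """Simple (biased) truncation: a chain that has already reached level k
    is terminated as soon as its importance value drops below l_{k - beta}."""
    return h_value < levels[max(k - beta, 0)]   # True -> kill the chain
```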

  17. [Diagram: h trajectory with levels l1–l4; with β = 2, the chain is terminated once it falls two levels below the highest level reached]

  18. Unbiased Truncation Use the ‘Russian Roulette’ principle: The first time a chain ‘down-crosses’ a level threshold it dies with probability (1 – 1/rk,j). If it survives then its weight is increased by a factor of rk,j. (these rk,j are user-defined and determine the ‘strength’ of the truncation)
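A minimal Python sketch of the roulette step applied at a down-crossing; the kill probability and weight factor follow the slide, while the function itself is illustrative.

```python
import random

def russian_roulette(weight, r_kj):
    """Applied the first time a chain down-crosses a level threshold:
    the chain dies with probability 1 - 1/r_kj; if it survives, its weight
    is multiplied by r_kj, so the weighted estimator remains unbiased."""
    if random.random() < 1.0 - 1.0 / r_kj:
        return None                    # chain is killed
    return weight * r_kj               # survivor carries the boosted weight
```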

  19. How to choose the rk,j • The selection of the rk,j at each level of the process controls the aggressiveness of the truncation policy. • A tried-and-true value:

  20. [Diagram: a chain down-crosses a level threshold; it dies with prob. (1 – 1/r3,2), and its weight increases by a factor of r3,2 if the chain survives]

  21. Russian Roulette, cont. • There are various methods by which the chain weights can compensate for this truncation: • Probabilistic • Tag-based • Periodic

  22. Truncation w/o weights • Chain-weighting truncation methods can inflate the variance of the estimator of γ. • We can avoid this problem by allowing chains to probabilistically re-split when they re-cross a previously reached threshold.

  23. Conclusions and notes • Potential performance: with γ = 10^-20, Var[MC] = 10^-23 while Var[split] = 10^-41. • Poorly-behaved systems: splitting can be inefficient to apply.

  24. References • L'Ecuyer, P., V. Demers, and B. Tuffin. 2006. Splitting for rare-event simulation. • Glasserman, P., P. Heidelberger, and T. Zajic. 1998. A large deviations perspective on the efficiency of multilevel splitting. • L'Ecuyer, P., V. Demers, and B. Tuffin. 2006. Rare events, splitting, and quasi-Monte Carlo. • Garvels, M. J. J. 2000. The splitting method in rare event simulation.
