
Analysis of Simulation Results

Presentation Transcript


  1. Analysis of Simulation Results Andy Wang CIS 5930-03 Computer Systems Performance Analysis

  2. Analysis of Simulation Results
  • Check for correctness of implementation
    • Model verification
  • Check for representativeness of assumptions
    • Model validation
  • Handle initial observations
  • Decide how long to run the simulation

  3. Model Verification Techniques
  • Verification is similar to debugging
    • Programmer’s responsibility
  • Validation
    • Modeler’s responsibility

  4. Top-Down Modular Design
  • Simulation models are large computer programs
  • Software engineering techniques apply
    • Modularity
      • Well-defined interfaces for pieces to coordinate
    • Top-down design
      • Hierarchical structure

  5. Antibugging
  • Sanity checks
    • Probabilities of events should add up to 1
    • No simulated entities should disappear
      • Packets sent = packets received + packets lost
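
Checks like these are easy to automate as assertions that run at the end of every simulation. A minimal sketch in Python; the counter values and the tolerance are illustrative, not from the slides:

    def check_probabilities(probs, tol=1e-9):
        # Sanity check: event probabilities should add up to 1.
        total = sum(probs)
        assert abs(total - 1.0) < tol, f"probabilities sum to {total}, not 1"

    def check_conservation(sent, received, lost):
        # Sanity check: no simulated entity may disappear.
        assert sent == received + lost, (
            f"{sent} sent != {received} received + {lost} lost")

    check_probabilities([0.5, 0.3, 0.2])
    check_conservation(sent=1000, received=987, lost=13)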

  6. Structured Walk-Through
  • Explaining the code to another person
  • Many bugs are discovered by reading the code carefully

  7. Deterministic Models
  • Hard to verify a simulation driven by random inputs
  • Debug with constant or deterministic distributions instead

  8. Run Simplified Cases
  • Use only one packet, one source, one intermediary node
  • Analytical and simulated results can then be compared

  9. Trace
  • Time-ordered list of events
    • With associated variables
  • Should offer levels of detail
    • In terms of events occurred, procedures called, or variables updated
    • Properly indented to show the levels
  • Should allow traces to be turned on and off (see the sketch below)
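
Python’s standard logging module already provides the on/off switch and severity filtering the slide asks for, and indentation can encode the nesting level. A minimal sketch; the event names and fields are illustrative:

    import logging

    logging.basicConfig(format="%(message)s")
    trace = logging.getLogger("sim.trace")
    trace.setLevel(logging.DEBUG)   # logging.CRITICAL effectively turns tracing off

    def log_event(time, name, depth=0, **fields):
        # One line per event, indented by nesting depth, with associated variables.
        trace.debug("t=%10.3f %s%s %s", time, "  " * depth, name, fields)

    log_event(0.000, "packet_arrival", depth=0, src=1)
    log_event(0.000, "enqueue", depth=1, queue_len=3)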

  10. On-line Graphic Displays
  • Important when viewing a large amount of data
  • E.g., verifying that a CPU scheduler preempts processes according to priorities and time budgets

  11. Continuity Test
  • Run the simulation with slightly different values of the input parameters
  • A slight (Δ) change in input should lead to a correspondingly slight change in output
  • If not, there are possibly bugs (see the sketch below)
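
One way to automate a continuity test is to sweep an input parameter in small steps and flag output jumps far larger than the typical step-to-step change. A minimal sketch, where run_simulation stands in for the model under test and the threshold is an illustrative assumption:

    def continuity_test(run_simulation, lo, hi, steps=20, jump_factor=10.0):
        # Sweep the input parameter and flag disproportionate output jumps.
        xs = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
        ys = [run_simulation(x) for x in xs]
        typical = (max(ys) - min(ys)) / steps or 1e-12  # typical per-step change
        for x0, x1, y0, y1 in zip(xs, xs[1:], ys, ys[1:]):
            if abs(y1 - y0) > jump_factor * typical:
                print(f"suspicious jump: input {x0:.4g}->{x1:.4g}, "
                      f"output {y0:.4g}->{y1:.4g}")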

  12. Degeneracy Test
  • Test extreme simulation input and configuration parameters
    • Routers with zero service time
    • Idle and peak load
  • Also test unusual combinations
    • A single CPU without a disk

  13. Consistency Tests
  • Check results for input parameters that should have similar effects
    • Two sources with an arrival rate of 100 packets per second
    • Four sources with an arrival rate of 50 packets per second
  • If the results are dissimilar, possibly bugs

  14. Seed Independence
  • Different random seeds should yield statistically similar results (see the sketch below)
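
A seed-independence check can be scripted: run the same model once per seed and verify that the per-seed means agree to within a few standard errors. A minimal sketch; the dummy model at the end is illustrative only:

    import random
    import statistics

    def seed_independence_test(run_simulation, seeds=(1, 2, 3, 4, 5)):
        # One complete run per seed; the means should be statistically similar.
        means = [run_simulation(random.Random(s)) for s in seeds]
        grand = statistics.mean(means)
        se = statistics.stdev(means) / len(means) ** 0.5
        print(f"per-seed means: {[round(x, 4) for x in means]}")
        print(f"grand mean {grand:.4f} +/- {se:.4f} (standard error)")

    # Dummy model: mean of 1000 exponential service times.
    model = lambda rng: statistics.mean(rng.expovariate(1.0) for _ in range(1000))
    seed_independence_test(model)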

  15. Model Validation Techniques
  • Should validate
    • Assumptions
    • Input values and distributions
    • Output values and conclusions
  • Against
    • Expert intuition
    • Real-system measurements
    • Theoretical results

  16. Model Validation Techniques
  • May not be possible to check all nine combinations (three aspects × three references)
    • The real system may not be available
  • Validation may not be possible at all
    • The simulation may have been built as the last resort, precisely because nothing else exists to compare against
    • E.g., an economic model

  17. Expert Intuition
  • Should validate assumptions, input, and output separately, and as early as possible
  • E.g., ask an expert: why would increased packet loss lead to better throughput?
  • [Figure: throughput vs. % packet loss]

  18. Real-System Measurements
  • Most reliable way to validate
  • Often not feasible
    • The system may not exist
    • Too expensive to measure
  • Apply statistical techniques to compare model output and measured data
  • Use multiple traces gathered under different environments

  19. Theoretical Results
  • Can apply queueing models
  • If the system is too complex
    • Can validate only the common scenarios
    • Can validate a small subset of the simulation parameters
  • E.g., compare analytical equations against CPU simulation models with one and two cores
    • Then use the validated simulation to simulate many cores

  20. Transient Removal
  • In most cases, we care only about steady-state performance
  • Need to remove the initial (transient) data from the analysis
  • Difficulty: finding where the transient state ends

  21. Long Runs
  • Just run the simulation for a long time
    • Wastes resources
    • No way to be sure the run is long enough

  22. Proper Initialization
  • Start the simulation in a state close to steady state
    • Pre-populate requests in the various queues
    • Pre-load memory-cache contents
    • Pre-fragment storage (e.g., flash)
  • Reduces the length of the transient period

  23. Truncation
  • Assumes steady-state variance < transient-state variance
  • Algorithm (see the sketch below)
    • Measure variability in terms of the range (min and max)
    • Remove the first L observations, one at a time
    • Until the (L + 1)th observation is neither the minimum nor the maximum of the remaining observations
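
A direct transcription of this rule into Python might look as follows; a minimal sketch, and the sample data are made up:

    def truncate_transient(data):
        # Remove the first L observations, one at a time, until the (L+1)th
        # observation is neither the min nor the max of the remaining ones.
        for L in range(len(data)):
            rest = data[L:]
            if min(rest) < rest[0] < max(rest):
                return L, rest        # discard L points; rest is steady state
        return len(data), []          # never stabilized: all data transient

    # Warm-up ramp followed by noise around 10:
    obs = [1, 2, 3, 4, 10, 9, 10, 11, 10, 9, 10, 11]
    L, steady = truncate_transient(obs)
    print(f"discard the first {L} observations; steady state = {steady}")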

  24. Truncation
  • [Figure: a sample series of observations with the truncation point L marked]

  25. Initial Data Deletion
  • m replications
  • n data points for each replication
  • Xij = jth data point of the ith replication

  26. Initial Data Deletion
  • Step 1: average across replications: X̄j = (1/m) Σi Xij, ∀ j

  27. Initial Data Deletion
  • Step 2: compute the grand mean µ = (1/n) Σj X̄j
  • Step 3: compute µL = the average of the last n – L values (X̄L+1, …, X̄n), ∀ L

  28. Initial Data Deletion
  • Step 4: offset µL by µ and normalize, computing the relative change ΔµL = (µL – µ)/µ
  • [Figure: ΔµL vs. L; the knee where the curve flattens marks the end of the transient interval]
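
Putting the four steps together; a minimal sketch, assuming reps is a list of equal-length replications and µ is nonzero:

    def initial_data_deletion(reps):
        # reps[i][j] = j-th data point of the i-th replication.
        m, n = len(reps), len(reps[0])
        # Step 1: average across replications at each position j.
        xbar = [sum(rep[j] for rep in reps) / m for j in range(n)]
        # Step 2: grand mean over all n positions.
        mu = sum(xbar) / n
        deltas = []
        for L in range(n):
            # Step 3: mean of the last n - L values.
            mu_L = sum(xbar[L:]) / (n - L)
            # Step 4: relative change; flattens once the transient is deleted.
            deltas.append((mu_L - mu) / mu)
        return deltas   # plot against L and look for the knee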

  29. Moving Average of Independent Replications
  • Similar to initial data deletion
  • Requires computing the mean over a sliding time window

  30. Moving Average of Independent Replications
  • m replications
  • n data points for each replication
  • Xij = jth data point of the ith replication

  31. Moving Average of Independent Replications
  • Step 1: average across replications: X̄j = (1/m) Σi Xij, ∀ j

  32. Moving Average of Independent Replications
  • Step 2: pick a k, say 1; average the (j – k)th through (j + k)th data points, ∀ j; increase k as necessary until the curve is smooth
  • [Figure: moving average vs. j; the transient interval ends where the curve flattens]
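
The smoothing step as code; a minimal sketch, assuming xbar is the list of across-replication means from Step 1:

    def moving_average(xbar, k=1):
        # Step 2: smooth with a window of 2k + 1 points; only positions
        # with a full window are kept.
        n = len(xbar)
        return [sum(xbar[j - k : j + k + 1]) / (2 * k + 1)
                for j in range(k, n - k)]

    # Increase k until the plotted curve looks smooth; the transient
    # ends where the smoothed means level off.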

  33. Batch Means
  • Used for very long simulations
  • Divide the N data points into m batches of n data points each
  • Step 1: pick n, say 1; compute the mean of each batch
  • Step 2: compute the mean of the batch means
  • Step 3: compute the variance of the batch means
  • Step 4: n++, go to Step 1

  34. Batch Means
  • Rationale: as n approaches the transient length, the variance of the batch means peaks (see the sketch below)
  • Does not work well with few data points
  • [Figure: batch size n vs. variance of batch means; the peak marks the transient interval]
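
A sketch of the loop from slide 33, computing the variance of the batch means for every batch size; the function name and the max_n cutoff are illustrative:

    def batch_mean_variance_curve(data, max_n=None):
        # For each batch size n, compute the variance of the batch means.
        # The n where the variance peaks estimates the transient length.
        N = len(data)
        curve = {}
        for n in range(1, (max_n or N // 2) + 1):
            m = N // n                     # number of complete batches
            if m < 2:
                break
            means = [sum(data[i * n:(i + 1) * n]) / n for i in range(m)]
            grand = sum(means) / m
            curve[n] = sum((x - grand) ** 2 for x in means) / (m - 1)
        return curve                       # plot n vs. curve[n]; find the peak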

  35. Terminating Simulations
  • Terminating simulations: for systems that never reach a steady state
    • E.g., network traffic that consists of transfers of small files
      • Transferring large files just to reach steady state is not useful
    • System behavior changes with time
    • Cyclic behavior
  • Less need for transient removal

  36. Terminating Conditions
  • E.g., increase the number of multimedia streams
    • Until 10 deadlines are missed within one time window
  • May not terminate
    • One deadline miss may stretch across time windows
  • Fix: until missing 3 deadlines…

  37. Final Conditions
  • Handling the end of simulations
  • Might need to exclude some final data points
    • E.g., mean service time = total service time / number of completed jobs; jobs still in service when the simulation ends would bias the ratio

  38. Stopping Criteria: Variance Estimation
  • If the simulation run is too short
    • Results are highly variable
  • If too long
    • Resources are wasted
  • Only need to run until the confidence interval is narrow enough
  • Since the confidence interval is a function of the variance, how do we estimate the variance?

  39. Independent Replications
  • m runs with different seed values
  • Each run has n + n0 data points
    • The first n0 data points are discarded as transient
  • Step 1: compute the mean of each replication from its n data points
  • Step 2: compute µ, the mean of the means
  • Step 3: compute σ², the variance of the means

  40. Independent Replications
  • Confidence interval: µ ± z1-α/2·σ/√m
    • Use t[1-α/2; m – 1] for m < 30
  • This method discards m·n0 data points
    • A good idea to keep m small
    • Increase n to get a narrower confidence interval
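
The procedure on slides 39 and 40 as one function; a minimal sketch, assuming z = 1.96 for a 95% interval (for m < 30 the t quantile should replace z, as the slide notes):

    import statistics

    def independent_replications_ci(reps, n0, z=1.96):
        # reps[i] = the n0 + n observations of the i-th independently seeded run.
        means = [statistics.mean(rep[n0:]) for rep in reps]  # Step 1: drop transient
        mu = statistics.mean(means)                          # Step 2: mean of means
        sigma = statistics.stdev(means)                      # Step 3: sqrt of variance
        m = len(reps)
        half = z * sigma / m ** 0.5    # for m < 30, replace z with t[1-a/2; m-1]
        return mu - half, mu + half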

  41. Batch Means
  • Given a long run of N + n0 data points
    • The first n0 data points are discarded as transient
  • The N data points are divided into m batches of n data points each

  42. Batch Means
  • Start with n = 1
  • Step 1: compute the mean of each batch
  • Step 2: compute µ, the mean of the means
  • Step 3: compute σ², the variance of the means
  • Confidence interval: µ ± z1-α/2·σ/√m
    • Use t[1-α/2; m – 1] for m < 30

  43. Batch Means
  • Compared to independent replications
    • Only needs to discard n0 data points
  • Problem with batch means
    • Autocorrelation if the batch size n is small
      • The mean of the ith batch predicts the mean of the (i + 1)th batch
  • Need to find a suitable batch size n

  44. Batch Means
  • Plot batch size n vs. variance of the batch means
  • Plot batch size n vs. autocovariance of neighboring batch means
    • Cov(batch_meani, batch_meani+1), ∀ i (see the sketch below)
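
A lag-1 autocovariance estimate for the batch means can be computed directly. A minimal sketch; dividing by m – 1 is one common estimator choice and an assumption here:

    def lag1_autocovariance_of_batch_means(data, n):
        # Split into m batches of size n and estimate Cov(mean_i, mean_{i+1}).
        m = len(data) // n
        if m < 3:
            raise ValueError("need at least 3 batches")
        means = [sum(data[i * n:(i + 1) * n]) / n for i in range(m)]
        grand = sum(means) / m
        cov = sum((a - grand) * (b - grand)
                  for a, b in zip(means, means[1:])) / (m - 1)
        return cov   # grow n until this is near zero (batches ~independent)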

  45. Method of Regeneration
  • Regeneration
    • The measured effects of one computational cycle are independent of the previous cycle
  • [Figure: a sample path with regeneration points delimiting regeneration cycles]

  46. Method of Regeneration
  • m regeneration cycles with ni data points each
  • Step 1: compute yi, the sum of each cycle
  • Step 2: compute the grand mean, µ
  • Step 3: compute the differences between expected and observed sums, wi = yi – ni·µ
  • Step 4: compute σw², the variance, based on the wi

  47. Method of Regeneration
  • Step 5: compute the average cycle length, c
  • Confidence interval: µ ± z1-α/2·σw/(c√m)
    • Use t[1-α/2; m – 1] for m < 30
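
All five steps as one function; a minimal sketch, where z = 1.96 assumes a 95% interval and the m – 1 divisor in the variance is an assumption consistent with slide 46:

    def regeneration_ci(cycles, z=1.96):
        # cycles[i] = observations collected during the i-th regeneration cycle.
        m = len(cycles)
        ys = [sum(c) for c in cycles]                  # Step 1: cycle sums
        ns = [len(c) for c in cycles]
        mu = sum(ys) / sum(ns)                         # Step 2: grand mean
        ws = [y - n * mu for y, n in zip(ys, ns)]      # Step 3: differences
        var_w = sum(w * w for w in ws) / (m - 1)       # Step 4: variance of the w_i
        c = sum(ns) / m                                # Step 5: mean cycle length
        half = z * var_w ** 0.5 / (c * m ** 0.5)       # t[1-a/2; m-1] when m < 30
        return mu - half, mu + half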

  48. Method of Regeneration
  • Advantages
    • Does not require removing transient data points
  • Disadvantages
    • Can be hard to find regeneration points

