
Classification of Discrete Event Simulation Models and Output Data: Creating a Sufficient Model Set.

Presentation Transcript


  1. Classification of Discrete Event Simulation Models and Output Data: Creating a Sufficient Model Set. Katy Hoad (kathryn.hoad@wbs.ac.uk), Stewart Robinson, Ruth Davies, Mark Elder. www.wbs.ac.uk/go/autosimoa. Funded by EPSRC and SIMUL8 Corporation.

  2. AIM: Provide a representative and sufficient set of models / data output for use in discrete event simulation research.

  3. MODEL CLASSIFICATION: Creating a Standard Set of Models/Outputs. Outline:
  • Motivation
  • Identification of model/output characteristics
  • Creation of a classification system

  4. Motivation
  Want to create an automated Analyser to advise the user on:
  • Warm-up length
  • Run-length
  • Number of replications
  [Flowchart on slide: Simulation model → Output data → Analyser (warm-up analysis, replications analysis, run-length analysis) → use replications or long-run? → obtain more output data → recommendation, where possible.]

  5. Motivation
  • Needed to test output analysis methods to find the most effective methods, and to test the created algorithms for effectiveness and robustness.
  • Required a set of models / output data that sufficiently covered the different types of possible models / output.
  • Could not find a general set in the public domain.

  6. Identification of model/output characteristics
  How do you define a sufficient and representative set of models/output?
  AIM: To define a set of characteristics that classify/describe a model and its output.
  • Searched the literature.
  • Collected and studied ‘real’ models/output.

  7. [Classification diagram on slide: terminating vs. non-terminating models; transient vs. steady-state output; in/out of control; cycling/seasonality; auto-correlation; normality.]

  8. Two main categories or groups:
  • Transient (including out-of-control trend)
  • Steady-state (including steady-state cycle)
  Nine other characteristics of models / output were chosen to categorize the models / output within these two main groups.

  9. Model characteristics:
  • Deterministic or stochastic (random)
  • Significant pre-determined model changes (by time)
  • Dynamic internal changes, i.e. ‘feed-back’
  • Empty-to-empty pattern
  Output data characteristics:
  • Initial transient (warm-up)
  • Out-of-control trend (ρ ≥ 1)
  • Cycle
  • Auto-correlation
  • Statistical distribution

  10. Looked at over 50 real models, defined as discrete event simulation models of real existing / future systems.
  Justification of selection of model output:
  • Picked the most likely output result for each model, using the already-programmed results collection where feasible.

  11. Further Analysis
  Each real model was statistically analysed as follows (a sketch of the steady-state checks is given after this list):
  1. Steady state: subtract the mean from the output data; test the residuals for auto-correlation and normality.
  2. Steady-state cycle: run the model for many cycles; take the mean of each cycle to create a new time series; subtract the mean from this new output data; test the residuals for auto-correlation and normality.
  3. Transient: test the output data for auto-correlation; run many replications (1000); take the mean of each replication to create a new (non auto-correlated) data set; test which type of statistical distribution best fits this new data set.
  4. Out-of-control: plot the data.
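  The steady-state step above can be expressed in a few lines. This is a minimal sketch assuming NumPy, SciPy and statsmodels are available; the function name and the specific tests (Ljung-Box for auto-correlation, Shapiro-Wilk for normality) are illustrative choices, since the slides do not name the exact tests used.

```python
# Sketch of the steady-state residual checks (illustrative only).
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import acorr_ljungbox

def check_steady_state_residuals(output, lags=10):
    """Subtract the mean from steady-state output, then test the residuals
    for auto-correlation (Ljung-Box) and normality (Shapiro-Wilk)."""
    output = np.asarray(output, dtype=float)
    residuals = output - output.mean()

    # Ljung-Box: a small p-value suggests significant auto-correlation.
    lb = acorr_ljungbox(residuals, lags=[lags])
    autocorr_p = float(lb["lb_pvalue"].iloc[0])

    # Shapiro-Wilk: a small p-value suggests the residuals are not normal.
    _, normal_p = stats.shapiro(residuals)

    return {"autocorrelation_p": autocorr_p, "normality_p": normal_p}

if __name__ == "__main__":
    # Example: an AR(1)-like series standing in for steady-state model output.
    rng = np.random.default_rng(1)
    x = np.zeros(2000)
    for t in range(1, len(x)):
        x[t] = 0.7 * x[t - 1] + rng.normal()
    print(check_steady_state_residuals(x + 5.0))
```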

  12. Analysis Results
  Steady-state data:
  • Auto-correlation: AR(1), AR(2), some AR(3+), some ARMA(n,n) and some with no auto-correlation.
  • Distributions: normal and non-normal.
  Transient data:
  • Auto-correlation: AR(1), AR(2), some AR(3+), some ARMA(n,n) and some with no auto-correlation.
  • Distributions found to be a ‘good’ fit to the various transient data outputs: Normal, Beta, Pearson 5, LogNormal, Weibull, Gamma, Pearson 6, Erlang, Chi-squared, bi-modal.

  13. Classification Tables
  • MODEL SUMMARY_Steady State.xls
  • MODEL SUMMARY_Transient.xls
  AIM:
  • Collect ‘real’ models to cover the range of model classifications (an on-going process).
  • Create artificial models to cover the range of output data classifications.

  14. Sample of artificial models from the literature: steady-state outputs with or without a warm-up period.
  • Cash et al. 1992: AR(1); M/M/1; Markov Chain.
  • Robinson 2007: AR(1); M/M/1.
  • Goldsman et al. 1994: AR(1); M/M/1.
  • White, Cobb & Spratt 2000: AR(2).
  • Ockerman & Goldsman 1997: Random Walk; AR(1); MA(1).
  • Kelton & Law 1983: M/M/1 (FIFO); M/M/1 (LIFO); M/M/1 (SIRO); M/M/1 (initialized with 10 customers); E4/M/1; M/H2/1; M/M/2; M/M/4; M/M/1/M/1/M/1.
  • Hsieh et al. 2004: M/M/1/199; M/G/1/199; M/M/1/19; number-in-stock process of a single-item inventory management system.

  15. Three main methods for creating artificial models / output data sets (a sketch of method 3 follows below):
  1. Create simple simulation models where the theoretical value of some output / response is known.
  E.g. Model: M/M/1. Output: mean waiting time.
  2. Create simple simulation models where the value of some output / response is estimated, but the model characteristics can be controlled.
  E.g. Model: single-item inventory management system. Output: number-in-stock.
  3. Create data sets from known equations which closely resemble real model output, with a known value for some specific output / response.
  E.g. AR(1) with Normal(0,1) errors. Output: mean.
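  As an illustration of method 3, the sketch below generates an AR(1) series with Normal(0,1) errors around a known theoretical mean. The parameter values (phi, mean, series length) are assumptions for illustration, not values from the project.

```python
# Sketch of method 3: an AR(1) data set with Normal(0,1) errors and a known
# steady-state mean, usable as artificial "model output" for testing analysis
# methods. Parameter choices are illustrative assumptions.
import numpy as np

def ar1_output(n=5000, phi=0.8, mean=10.0, seed=0):
    """Generate x[t] = mean + phi*(x[t-1] - mean) + e[t], with e[t] ~ N(0, 1).

    For |phi| < 1 the process is stationary with theoretical mean `mean` and
    variance 1 / (1 - phi**2), so an analysis method's estimate can be
    checked against the true value."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = mean  # start at the steady-state mean (no initial transient)
    for t in range(1, n):
        x[t] = mean + phi * (x[t - 1] - mean) + rng.normal()
    return x

if __name__ == "__main__":
    data = ar1_output()
    print("theoretical mean:", 10.0, "sample mean:", round(float(data.mean()), 3))
```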

  16. Our Project: Replications and Warm-up Method Testing
  Replication method testing:
  • Data sets of replicated mean values from transient output – left- and right-skewed, normal and bi-modal.
  • Real models.
  Warm-up method testing (a sketch of a biased test series is given below):
  • Steady-state functions: AR(1), AR(2), AR(4), MA(2), ARMA(5,5), no auto-correlation.
  • Initialisation bias functions: severity, length, shape.
  • Real models.
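  To illustrate how a warm-up test series might be constructed, the sketch below superimposes an initialisation bias of controllable severity, length and shape onto a steady-state AR(1) function. The exponential-decay bias shape is an assumption for illustration; the slides do not specify the exact bias functions used.

```python
# Sketch of a warm-up test series: steady-state AR(1) output plus an
# initialisation bias with controllable severity, length and shape.
# The decay shape is an illustrative assumption.
import numpy as np

def biased_ar1(n=2000, phi=0.8, severity=5.0, bias_length=300, shape=3.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(n)
    for t in range(1, n):                       # steady-state AR(1) around 0
        x[t] = phi * x[t - 1] + rng.normal()
    t = np.arange(n)
    bias = np.where(t < bias_length,
                    severity * np.exp(-shape * t / bias_length),  # decaying bias
                    0.0)
    return x + bias

if __name__ == "__main__":
    series = biased_ar1()
    # A warm-up detection method should flag roughly the first `bias_length`
    # observations as contaminated by the initialisation bias.
    print(series[:5].round(2), "...", series[-5:].round(2))
```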

  17. SUMMARY
  • Produced a classification of model and output data types for the purpose of aiding research into simulation output analysis.
  • Currently using artificial models that broadly cover each output type in the classification tables in our research into output analysis methods.
  www.wbs.ac.uk/go/autosimoa

  18. DISCUSSION: YOUR COMMENTS APPRECIATED
  • Using our chosen classification criteria, we have classified a complete set of possible models / output: but are these criteria sufficient?
  • Main model/output types missing from our collection:
  • Transient with warm-up.
  • Deterministic transient.
  • Cycle with warm-up.
  • Are these missing model types feasible?

  19. Thank you for listening.
  ACKNOWLEDGMENTS: This work is part of the Automating Simulation Output Analysis (AutoSimOA) project, which is funded by the UK Engineering and Physical Sciences Research Council (EPSRC, grant EP/D033640/1). The work is being carried out in collaboration with SIMUL8 Corporation, who are also providing sponsorship for the project.
  Stewart Robinson, Katy Hoad, Ruth Davies. INFORMS, November 2007. www.wbs.ac.uk/go/autosimoa
