
Real-Time Queueing Theory



Presentation Transcript


  1. Real-Time Queueing Theory. Presented by: John Lehoczky, Carnegie Mellon. SAMSI Workshop on Congestion Control and Heavy Traffic.

  2. Background
  • Real-time systems are computer and communication systems in which the applications/tasks/jobs/packets have explicit timing requirements (deadlines).
  • These arise, for example, in:
    • voice and video transmission (e.g. teleconferencing)
    • control systems (e.g. automotive)
    • avionics systems

  3. Goals
  • For a given workload model we want:
    • to predict the fraction of the workload that will meet its deadline (end-to-end in the network case),
    • to design workload scheduling and control policies that will ensure service guarantees (e.g. a suitably small fraction miss their deadline),
    • to investigate network design issues, e.g.:
      • number of priority bits needed
      • cost/benefit from flow tables
      • cost/benefit from keeping lead-time information

  4. Model
  • Multiple streams in a multi-node acyclic network.
  • Independent streams of jobs.
  • Jobs in a stream form a renewal process and have independent computational requirements at each node.
  • For a given stream, each job has an i.i.d. deadline (different streams may have different deadline distributions).
  • Node processing is EDF (Q-EDF), FIFO, PS, or fixed priority. (A simulation sketch of one such stream follows below.)
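As a concrete illustration of one stream in this model, the sketch below generates jobs from a renewal process with i.i.d. work and i.i.d. relative deadlines. The `Job` fields, the exponential distributions, and the parameter values are illustrative assumptions; the model itself allows general inter-arrival, service, and deadline distributions.

```python
import random
from dataclasses import dataclass

@dataclass
class Job:
    arrival: float     # arrival time at the node
    work: float        # processing requirement at this node
    deadline: float    # absolute deadline = arrival + relative deadline

def job_stream(rate, mean_work, mean_deadline, horizon, seed=0):
    """Generate one stream: renewal arrivals with i.i.d. work and i.i.d.
    relative deadlines.  Exponential distributions are used here only to
    make the sketch concrete; the model allows general distributions."""
    rng = random.Random(seed)
    t, jobs = 0.0, []
    while True:
        t += rng.expovariate(rate)                  # renewal inter-arrival
        if t > horizon:
            return jobs
        jobs.append(Job(arrival=t,
                        work=rng.expovariate(1.0 / mean_work),
                        deadline=t + rng.expovariate(1.0 / mean_deadline)))

# e.g. one stream with rate 0.5, mean work 1, mean relative deadline 5:
stream = job_stream(rate=0.5, mean_work=1.0, mean_deadline=5.0, horizon=1000.0)
```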

  5. Analysis: 1
  • In addition to tracking the workload at each node, we need to track the lead-time (= time until the deadline elapses) of each task.
  • The dimensionality therefore becomes unbounded, and exact analysis is impossible.
  • We resort to a heavy-traffic analysis. This is appropriate for real-time problems: if we can analyze and control the system under heavy traffic, performance under moderate traffic will only be better. (See the sketch below.)
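Concretely, using the hypothetical `Job` objects from the previous sketch, the lead-time of a job is just its absolute deadline minus the current time, and EDF always serves the job with the smallest lead-time; the exact state must carry one such entry per job in system, which is what makes its dimension unbounded.

```python
def lead_times(jobs_in_system, now):
    # Lead-time = time remaining until the deadline elapses (negative if
    # the job is already late).  The exact state is the workload plus this
    # entire vector, one entry per job in system, hence unbounded dimension.
    return [job.deadline - now for job in jobs_in_system]

def edf_choice(jobs_in_system, now):
    # EDF serves the job with the smallest lead-time
    # (equivalently, the earliest absolute deadline).
    return min(jobs_in_system, key=lambda job: job.deadline - now)
```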

  6. Analysis: 2
  • Heavy-traffic analysis (the traffic intensity at each node converges to 1).
  • With one node, the workload converges to a Brownian motion; with multiple nodes, the workload may converge to a reflected Brownian motion (RBM).
  • Conditional on the workload, the lead-time profile converges to a deterministic form depending upon:
    • the stream deadline distributions,
    • the scheduling policy,
    • the traffic intensity.
  • Combining the lead-time profile with the equilibrium distribution of the workload process, we can determine the lateness fraction for each flow. (A single-node back-of-the-envelope version is sketched below.)
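For a single FIFO node the lead-time bookkeeping is trivial (jobs are served in arrival order), so a rough single-node version of this recipe can be written down using the classical Kingman heavy-traffic approximation: in heavy traffic the stationary waiting time is approximately exponential with mean λ(σ_A² + σ_S²)/(2(1−ρ)). The sketch below is only that back-of-the-envelope version, under the added assumption that a job is late when its waiting time exceeds a deterministic relative deadline; it is not the lead-time-profile machinery of the talk.

```python
import math

def fifo_late_fraction_hta(lam, mean_service, var_interarrival, var_service,
                           rel_deadline):
    """Kingman-style heavy-traffic approximation at a single FIFO node.

    In heavy traffic the stationary waiting time W of a G/G/1 queue is
    approximately exponential with mean
        lam * (var_interarrival + var_service) / (2 * (1 - rho)).
    Assuming a job is late when its waiting time exceeds a deterministic
    relative deadline d, the late fraction is roughly P(W > d) = exp(-d/E[W]).
    """
    rho = lam * mean_service
    assert rho < 1.0, "the node must be stable (rho < 1)"
    mean_wait = lam * (var_interarrival + var_service) / (2.0 * (1.0 - rho))
    return math.exp(-rel_deadline / mean_wait)

# Example: Poisson(0.95) arrivals, exponential mean-1 service, deadline 20.
# The exact M/M/1 value P(W > 20) = 0.95 * exp(-0.05 * 20) ~= 0.35 for comparison.
print(fifo_late_fraction_hta(lam=0.95, mean_service=1.0,
                             var_interarrival=1 / 0.95 ** 2, var_service=1.0,
                             rel_deadline=20.0))
```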

  7.–10. Processor Sharing – Exponential Deadlines (four figure-only slides; the plots are not reproduced in this transcript)

  11.–13. Processor Sharing – Constant Deadlines (three figure-only slides; the plots are not reproduced in this transcript. A simulation sketch follows.)
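The processor-sharing slides above show results only as plots. As a rough way to produce numbers of that kind, the sketch below simulates a single processor-sharing node with Poisson arrivals, exponential service, and exponential relative deadlines, and estimates the fraction of jobs that finish after their deadline. The parameter values and the choice of exponential distributions are illustrative assumptions, not values taken from the slides.

```python
import random

def simulate_ps(lam=0.95, mean_service=1.0, mean_deadline=5.0,
                n_jobs=100_000, seed=1):
    """Single processor-sharing node: with n jobs present, each is served
    at rate 1/n.  Jobs are never dropped; a 'miss' is a job that finishes
    after its (exponentially distributed) relative deadline."""
    rng = random.Random(seed)
    t, next_arrival = 0.0, rng.expovariate(lam)
    in_system = []            # entries: [remaining_work, absolute_deadline]
    completed = missed = 0

    while completed < n_jobs:
        n = len(in_system)
        if n:
            i_min = min(range(n), key=lambda i: in_system[i][0])
            t_complete = t + in_system[i_min][0] * n   # next PS completion
        else:
            t_complete = float("inf")

        if next_arrival <= t_complete:
            dt = next_arrival - t
            for job in in_system:
                job[0] -= dt / n               # each job served at rate 1/n
            t = next_arrival
            in_system.append([rng.expovariate(1.0 / mean_service),
                              t + rng.expovariate(1.0 / mean_deadline)])
            next_arrival = t + rng.expovariate(lam)
        else:
            served = in_system[i_min][0]       # work each job receives
            for job in in_system:
                job[0] -= served
            t = t_complete
            _, deadline = in_system.pop(i_min)
            completed += 1
            if t > deadline:
                missed += 1

    return missed / completed

if __name__ == "__main__":
    # rho = lam * mean_service = 0.95, i.e. heavy traffic.
    print("estimated PS miss fraction:", simulate_ps())
```

Replacing the exponential relative deadline with a constant gives the constant-deadline setting of slides 11–13; the simulation loop itself is unchanged.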

  14. EDF Miss Rate Prediction (figure slide)
  • EDF deadline miss rate, plotted against the mean deadline
  • ρ = 0.95, EDF scheduling, Uniform(10, x) deadlines
  • Traffic models: Internet, Exponential
  • The predicted miss rate is computed from the first two moments of the task inter-arrival times and service times. (A simulation sketch of this setting follows.)
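Slide 14 compares a predicted EDF miss rate, computed from the first two moments of the inter-arrival and service times, against simulation. The simulation side of such a comparison can be sketched as below: a single preemptive-EDF node at traffic intensity 0.95 with Uniform(10, x) relative deadlines, where x = 50, Poisson arrivals, and exponential service are all assumptions made for illustration, since the slide's exact traffic models (including the Internet traffic) are not given in the transcript.

```python
import heapq
import random

def simulate_edf(lam=0.95, mean_service=1.0, dmin=10.0, dmax=50.0,
                 n_jobs=100_000, seed=1):
    """Preemptive EDF at a single node: the job with the earliest absolute
    deadline is always the one in service.  Late jobs are kept and finished;
    a 'miss' is a completion after the job's deadline."""
    rng = random.Random(seed)
    t, next_arrival = 0.0, rng.expovariate(lam)
    heap = []                 # entries: [absolute_deadline, remaining_work]
    completed = missed = 0

    while completed < n_jobs:
        if heap:
            t_complete = t + heap[0][1]         # finish time of the EDF job
        else:
            t_complete = float("inf")

        if next_arrival <= t_complete:
            if heap:
                heap[0][1] -= next_arrival - t  # work done before the arrival
            t = next_arrival
            heapq.heappush(heap, [t + rng.uniform(dmin, dmax),
                                  rng.expovariate(1.0 / mean_service)])
            next_arrival = t + rng.expovariate(lam)
        else:
            t = t_complete
            deadline, _ = heapq.heappop(heap)
            completed += 1
            if t > deadline:
                missed += 1

    return missed / completed

if __name__ == "__main__":
    # rho = lam * mean_service = 0.95, matching the heavy-traffic setting.
    print("estimated EDF miss fraction:", simulate_edf())
```

Sweeping dmax (the x in Uniform(10, x)) and plotting the estimated miss fraction against the mean deadline (10 + x)/2 gives a curve of the kind the slide appears to show.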
