

  1. IEG4020 Telecommunication Switching and Network Systems Chapter 4 Switch Performance Analysis and Design Improvements

  2. Internally Nonblocking Switch: Loss System. Each input carries a packet with probability ρ0 in a time slot, destined uniformly at random to one of the N outputs; packets that lose output contention are dropped. The probability that a given output carries a packet is P = Pr[carry a packet] = 1 − (1 − ρ0/N)^N → 1 − e^(−ρ0) for large N. For ρ0 = 1, P ≈ 0.632.
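
A minimal numerical sketch of the expression above and its large-N limit (the function name is illustrative):

```python
import math

def carried_load(rho0, N):
    """Pr[an output carries a packet] for an internally nonblocking loss
    system: each of N inputs sends a packet w.p. rho0 to a uniform output."""
    return 1 - (1 - rho0 / N) ** N

for N in (4, 16, 256):
    print(N, round(carried_load(1.0, N), 4))
print("limit:", round(1 - math.exp(-1.0), 4))   # 1 - e^-1 ~ 0.6321
```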

  3. Fig. 4.1. Illustration of head-of-line (HOL) blocking: in an internally nonblocking switch, the HOL packet that loses contention for its output also blocks the packet queued behind it, which cannot access output 2 even though output 2 is free.

  4. Fig. 4.2. An input-buffered switch with the fictitious queues used for analysis: the HOL packets of the input queues, labeled (input, output), are grouped by destination into one fictitious output queue per output.

  5. Throughput of Input-Buffered Switch. Consider the fictitious queue associated with a particular output i. Let C_m^i = # packets in the fictitious queue at the start of time slot m, A_m^i = # packets arriving at the start of time slot m, B_m^i = # packets remaining at the end of time slot m, and F_m = # inputs that won contention in time slot m. Since at most one HOL packet is cleared per output per slot, B_m^i = max(0, C_m^i − 1), where C_m^i = B_{m−1}^i + A_m^i.

  6. Example: evolution of fictitious queue i over time slots m−1 and m; newly arriving HOL packets destined for output i join the queue, and one packet is cleared per slot.

  7. The probability that any given output link is empty (carries no packet) in a time slot is Pr[B_{m−1}^i + A_m^i = 0]; the throughput per output is one minus this empty probability.

  8. As N → ∞, the arrival process A_m^i to the fictitious queue becomes Poisson and independent of B_{m−1}^i.

  9. Under saturation, where every input always has a packet waiting, the fictitious queue behaves like a discrete-time M/D/1 queue, and the maximum throughput is ρ* = 2 − √2 ≈ 0.586.
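
A sketch of how the 2 − √2 value falls out, assuming the standard saturation argument: the mean end-of-slot backlog of a discrete-time M/D/1 queue with load ρ is ρ²/(2(1 − ρ)), and under saturation the backlogged HOL packets across all N fictitious queues must equal the N(1 − ρ*) inputs that did not win contention:

```latex
% Each fictitious queue sees Poisson(\rho^*) arrivals as N \to \infty, so
\bar{B} \;=\; \frac{(\rho^{*})^{2}}{2\,(1-\rho^{*})}.
% Every input holds exactly one HOL packet; those not served in a slot stay blocked:
N\,\frac{(\rho^{*})^{2}}{2\,(1-\rho^{*})} \;=\; N\,(1-\rho^{*})
\;\Longrightarrow\;
(\rho^{*})^{2} = 2\,(1-\rho^{*})^{2}
\;\Longrightarrow\;
\rho^{*} = 2-\sqrt{2}\approx 0.586 .
```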

  10. Meaning of Saturation Throughput. Let ρ* be the saturation throughput of the input-buffered switch with the FIFO discipline. When the offered load ρ0 ≤ ρ*, the throughput equals ρ0 and the system is stable. When the offered load ρ0 > ρ*, the throughput stays at ρ*; the system is saturated, and in this case the input buffer will overflow with probability 1.
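
Restating the two cases above in one line:

```latex
\text{throughput} \;=\; \min(\rho_0,\ \rho^{*}).
```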

  11. How about small N? Table 4.1 gives the maximum throughput for input-buffered switches of different sizes; for example ρ* = 0.75 for N = 2, decreasing toward 2 − √2 ≈ 0.586 as N grows.
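
A rough cross-check of Table 4.1 can be done by simulation. The following minimal Monte Carlo sketch (names are illustrative) estimates the saturation throughput of an N × N FIFO input-buffered switch by keeping every input permanently backlogged:

```python
import random

def hol_saturation_throughput(N, slots=20000, seed=0):
    """Estimate the saturation throughput of an N x N input-buffered switch
    with FIFO queues: every input always has a HOL packet with a uniformly
    random destination, and each output clears one contending packet per slot."""
    rng = random.Random(seed)
    dest = [rng.randrange(N) for _ in range(N)]   # destination of each HOL packet
    cleared = 0
    for _ in range(slots):
        contenders = {}
        for i, d in enumerate(dest):
            contenders.setdefault(d, []).append(i)
        for d, inputs in contenders.items():
            winner = rng.choice(inputs)           # one winner per output
            dest[winner] = rng.randrange(N)       # winner's next HOL packet
            cleared += 1
    return cleared / (slots * N)

for N in (2, 4, 8, 32):
    print(N, round(hol_saturation_throughput(N), 3))
# Expect about 0.75 for N = 2, approaching 2 - sqrt(2) ~ 0.586 as N grows.
```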

  12. Fig. 4.3. Queueing scenario for the delay analysis of the input-buffered switch: each input queue feeds the N fictitious output queues, each with probability 1/N. When N is large, the times spent at the HOL by successive packets are independent, and the service times at different fictitious queues are independent.

  13. Fig. 4.4. The busy periods and interpretations for the delay analysis of an input queue: the unfinished work U(t) alternates between busy periods, made up of successive HOL service times X0, X1, X2, X3, ..., and idle periods Y; arrivals during service time X_{i−1} are treated as arrivals in interval i−1, and those during X_{i−2} as arrivals in interval i−2.

  14. Fig. 4.5. Illustration of the random variables used in the delay analysis of an input queue: the packet of focus arrives in interval i and departs in interval i+1; here m_i = 2 packets arrived before it, and L = 1 simultaneous arrival is served before it; X_i and X_{i+1} are the HOL service times, R_i is the residual of interval i seen by the arriving packet, and W is its waiting time.

  15. Fig. 4.6. Different contention-resolution policies have different waiting-time versus load relationships, but a common maximum load at which the waiting time goes to infinity.

  16. Fig. 4.7. A unit line for determining the order of service among simultaneously arriving packets: simultaneous arrivals are placed at random on the unit line [0, 1], and the packet whose waiting time is being analyzed is placed at position t.

  17. Queuing Analysis in Output-Buffered Switch. Consider a switch with a speedup factor of N, so arriving packets reach the targeted output "immediately". Let A_m = # packets arriving at a given output at the start of time slot m; with each input loaded to ρ0 and destinations uniform, A_m is Binomial(N, ρ0/N), which tends to a Poisson distribution as N → ∞.

  18. Delay in Output-Buffered Switch. The number of packets arriving for an output in a time slot is not necessarily ≤ 1, which is why output queues are needed; the average delay then follows from the average queue length via Little's law.
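
A minimal simulation sketch of a single output queue, Q_m = max(0, Q_{m−1} + A_m − 1) with Binomial(N, ρ0/N) arrivals; Little's law converts the mean queue length into the mean waiting time, which approaches the M/D/1 value ρ0/(2(1 − ρ0)) as N grows (names are illustrative):

```python
import random

def output_queue_wait(N, rho, slots=100000, seed=1):
    """Mean waiting time (in slots) at one output of an output-buffered
    switch: Q_m = max(0, Q_{m-1} + A_m - 1), A_m ~ Binomial(N, rho/N)."""
    rng = random.Random(seed)
    q = q_sum = arrivals = 0
    for _ in range(slots):
        a = sum(rng.random() < rho / N for _ in range(N))
        arrivals += a
        q = max(0, q + a - 1)   # one packet transmitted per slot
        q_sum += q
    # Little's law: mean wait = mean number waiting / arrival rate.
    return (q_sum / slots) / (arrivals / slots)

rho = 0.8
for N in (4, 16, 64):
    print(N, round(output_queue_wait(N, rho), 2))
print("M/D/1 limit:", rho / (2 * (1 - rho)))   # = 2.0 slots for rho = 0.8
```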

  19. What if the FIFO constraint is removed? Look-ahead scheme: look at the first w packets of each input queue rather than only the HOL packet, at the cost of the overhead of extra rounds of contention resolution. The maximum throughput ρ* increases with the look-ahead depth: for w = 1, 2, 3, 4, ρ* ≈ 0.59, 0.70, 0.76, 0.80, approaching 1 as w grows. The actual throughput is lower once the contention overhead is taken into account.

  20. Fig. 4.8. The speedup principle: a switch with speedup factor S runs each switch cycle in 1/S of a time slot, so up to S packets may leave a given input or reach a given output in a time slot. Output queues are needed because more packets may arrive for an output in a time slot than it can transmit immediately; input queues are still needed to avoid packet loss at the inputs when S < N.

  21. Fig. 4.9(a). Packets are directed to switch 1 in the first half of the time slot and to switch 2 in the second half. Packet 2 is directed to switch 2 if packet 1 was cleared in switch 1; otherwise, packet 1 itself is directed to switch 2.

  22. Fig. 4.9. Methods for achieving the speedup effect without speeding up the switch operation: (a) using multiple switches; (b) using the packet-slicing concept, in which each packet is cut into halves carrying the same output address, switched in parallel, and reassembled at the output.

  23. Channel Grouping (Fig. 4.10). The switch is N x NR: each output is a group of R channels, so in each time slot at most one packet from each input and up to R packets to each output group are cleared. The maximum throughput grows rapidly with R: for R = 1, 2, 3, 4, ρ* ≈ 0.59, 0.89, 0.98, and nearly 1.
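
A sketch of the same saturation experiment as before, generalized to channel grouping so that each output group clears up to R HOL packets per slot; the estimates should land near the values quoted above (names are illustrative):

```python
import random

def channel_group_saturation(N, R, slots=20000, seed=0):
    """Saturation throughput per input of an N x NR switch with channel
    grouping: up to R contending HOL packets per output group clear per slot."""
    rng = random.Random(seed)
    dest = [rng.randrange(N) for _ in range(N)]   # HOL destinations
    cleared = 0
    for _ in range(slots):
        groups = {}
        for i, d in enumerate(dest):
            groups.setdefault(d, []).append(i)
        for d, inputs in groups.items():
            rng.shuffle(inputs)
            for winner in inputs[:R]:             # up to R winners per group
                dest[winner] = rng.randrange(N)
                cleared += 1
    return cleared / (slots * N)

for R in (1, 2, 3, 4):
    print(R, round(channel_group_saturation(32, R), 2))
# Roughly 0.59, 0.89, 0.98, ~1.0, in line with the values above.
```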

  24. Example with R = S = 2: with a speedup of 2, all the packets shown are cleared; with channel grouping, packet 3 is not cleared, since at most one packet may leave each input per time slot.

  25. Fig. 4.11. A Batcher-R-banyan network that implements the channel-grouping principle: a Batcher sorting network followed by R banyan networks, with an R-to-1 multiplexer (MUX) collecting the R channels of each output. Output i (i = 0, ..., N−1) is connected to input ⌊i/R⌋ of banyan network i mod R.

  26. Fig. 4.12(a). The expansion banyan network: expanders at the inputs feed truncated banyan networks whose output groups have size R.

  27. Fig. 4.12. (a) The expansion banyan network; (b) labeling of the truncated banyan network and its output groups: truncated banyan network b1 b2 ... bR has its inputs connected to outputs b1 ... bR of all the expanders, and its output groups of size R carry the relative output addresses b_{R+1} ... b_n, from 00...0 to 11...1.

  28. Fig. 4.13. (a) Multiplexer and output queue at an output of a channel-grouped switch: to accommodate up to R simultaneous packet arrivals, the R x 1 multiplexer must work R times faster than the link rate (R = 4 in the figure). (b) An implementation of a logical FIFO queue in which the multiplexer only has to work at the same speed as the link rate: a shifting concentrator loads packets into R parallel queues in round-robin fashion, and packets are read out from the queues in round-robin fashion.
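
A toy software analogue of the logical FIFO in (b), assuming round-robin write and read pointers over R physical queues; this preserves overall FIFO order while each queue is served at the link rate. The up-to-R simultaneous arrivals per slot that the shifting concentrator handles in hardware are serialized here for simplicity, and all names are illustrative:

```python
from collections import deque

class LogicalFifo:
    """Logical FIFO built from R physical queues: packets are written
    round-robin across the queues and read back round-robin, so the
    i-th packet read is always the i-th packet written."""
    def __init__(self, R):
        self.queues = [deque() for _ in range(R)]
        self.w = 0  # next queue to write
        self.r = 0  # next queue to read

    def enqueue(self, pkt):
        self.queues[self.w].append(pkt)
        self.w = (self.w + 1) % len(self.queues)

    def dequeue(self):
        if not self.queues[self.r]:
            return None                      # logical queue is empty
        pkt = self.queues[self.r].popleft()
        self.r = (self.r + 1) % len(self.queues)
        return pkt

q = LogicalFifo(R=4)
for pkt in "abcdef":
    q.enqueue(pkt)
print("".join(q.dequeue() for _ in range(6)))   # "abcdef": FIFO order preserved
```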

  29. Maximum Throughput of Channel-Grouped Switch. The queue-length generating function has the form C(z) = numerator / (z^R − A(z)). One can show that the roots of the denominator z^R − A(z) on or inside the unit circle are 1, z1, z2, ..., z_{R−1} with |zi| < 1, and that these zi must also be roots of the numerator; otherwise C(zi) would be infinite, which is not possible for a probability generating function.
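
For reference, a sketch of the standard generating-function setup behind these bullets, assuming the fictitious-queue recurrence B_m = max(0, B_{m−1} + A_m − R) with i.i.d. arrivals of PGF A(z); the boundary probabilities q_y below are introduced here for the sketch:

```latex
% With q_y = \Pr[B_{m-1} + A_m = y], taking z-transforms of the recurrence gives
C(z)\,\bigl(z^{R} - A(z)\bigr) \;=\; \sum_{y=0}^{R-1} q_y \,\bigl(z^{R} - z^{y}\bigr).
% The R unknowns q_0,\dots,q_{R-1} follow from C(1)=1 together with the requirement
% that the R-1 roots z_1,\dots,z_{R-1} of z^R - A(z) inside the unit circle also be
% roots of the right-hand side (otherwise C(z_i) would blow up).
```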

  30. Knockout Principle. If R is large (e.g., R = 8), we might as well not queue packets at the inputs at all but simply drop those that lose contention: the loss probability is small.

  31. Loss Probability in Knockout Switch. With each input loaded to ρ and uniform destinations, the number of packets destined for a given output in a slot is Binomial(N, ρ/N), and the knockout concentrator accepts at most R of them. The loss probability is the expected number of knocked-out packets divided by the offered load: P_loss = (1/ρ) Σ_{k=R+1}^{N} (k − R) C(N,k) (ρ/N)^k (1 − ρ/N)^{N−k} → (1/ρ) Σ_{k=R+1}^{∞} (k − R) e^(−ρ) ρ^k / k! as N → ∞.
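
A small script that evaluates the N → ∞ expression above and shows how quickly the loss falls with R (the function name is illustrative):

```python
import math

def knockout_loss(rho, R, kmax=100):
    """Knockout loss probability in the N -> infinity limit: arrivals per
    output are Poisson(rho), at most R accepted, so P_loss = E[(A-R)^+]/rho."""
    pk = math.exp(-rho)          # Pr[A = 0]
    excess = 0.0
    for k in range(1, kmax + 1):
        pk *= rho / k            # Pr[A = k], updated iteratively
        if k > R:
            excess += (k - R) * pk
    return excess / rho

for R in (1, 2, 4, 8, 12):
    print(R, f"{knockout_loss(0.9, R):.2e}")
# Loss falls off rapidly with R; by R = 8 it is already below 1e-6 at rho = 0.9.
```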

  32. We can show that the loss probability is bounded by a closed-form expression, by employing two inequalities, (1) and (2).

  33. Proof of inequality (1).

  34. Proof of inequality (2), by a Taylor series expansion.

  35. The bound follows by letting i = k − R, applying inequality (1), and using the mean of the binomial number of arrivals (each input contributes a packet with probability ρ/N).

  36. Applying inequality (2) then gives a bound that is independent of the input loads and independent of the number of ports.

  37. Fig. 4.14. A Batcher-banyan knockout switch: a Batcher sorting network, a reverse-banyan concentrator, R parallel N x N banyan networks, and a MUX at each output. Adjacent outputs of the Batcher network compare the output addresses a and b of their packets to decide whether packet 2 is let through.

  38. Fig. 4.15. Running adder network: the running-adder network (RAN) produces the concentrator output address for each packet.

  39. Fig. 4.16. (a) A central controller for computing the assignments of packets to concentrator outputs; (b) a running-adder address generator that computes the assignments in a parallel and distributed manner: the address assigned to a packet is the running sum of the activity bits above it.
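
In software, the assignment rule is just a prefix (running) sum of the activity bits; the sketch below (function name illustrative) mirrors what the adder network computes in parallel:

```python
def concentrator_addresses(active):
    """Assign each active input a concentrator output equal to the number of
    active inputs above it (a running / prefix sum of the activity bits)."""
    addr, running = [], 0
    for bit in active:
        addr.append(running if bit else None)   # None: inactive, no address
        running += bit
    return addr

print(concentrator_addresses([1, 0, 1, 1, 0, 1]))
# [0, None, 1, 2, None, 3] -- active packets are packed into outputs 0..3.
```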

  40. Fig. 4.17. A knockout switch based on broadcast buses and knockout concentrators: each of the N inputs broadcasts its packets on a bus, and each output taps all N buses through an N-input knockout concentrator with R outputs, followed by a shifter feeding the output buffer.

  41. Inside the knockout concentrator: packet filters at the inputs, 2x2 knockout elements, and delay elements D; losing packets drop out at each stage. Number of switch elements = (N−1) + (N−2) + ... + (N−R) = NR − R(R+1)/2 ≈ NR.
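
A two-line check of the element count: section k of the concentrator uses N − k elements, and the sum matches the closed form (N and R values below are just an example):

```python
def knockout_elements(N, R):
    """2x2 elements in an N-to-R knockout concentrator: section k of the
    knockout tournament uses N - k elements, for k = 1..R."""
    return sum(N - k for k in range(1, R + 1))

N, R = 64, 8
print(knockout_elements(N, R), N * R - R * (R + 1) // 2, N * R)
# 476 476 512 -- the exact count equals NR - R(R+1)/2 and is close to NR.
```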

  42. Fig. 4.18. An 8x4 knockout concentrator and the operation of its component 2x2 switch elements, shown for the cases of two contending active packets (a and b) and of an active packet paired with an inactive input.

  43. Replication Principle. Single banyan network: random routing. Parallel banyans: for a fixed P_loss requirement, the order of complexity remains N log2 N, but with a large constant.

  44. Fig. 4.19. A parallel-banyan network: a random router (or broadcaster) at each input distributes packets over the 1st through Kth banyan networks, and a statistical multiplexer collects them at each output.

  45. Fig. 4.20. An 8x8 banyan network with dilation degree 2 (inputs and outputs labeled 000 through 111).

  46. Fig. 4.21. An implementation of a 2d x 2d switch element, built from two concentrators, with order of complexity ~ O(d log d).
