This presentation covers memory management algorithms for distributed shared memory (DSM) switches, focusing on the gap between theoretical bounds and practical implementations. For FIFO queueing it is known that 2N memories are necessary and 3N-1 are sufficient. To narrow this gap, a counter-example shows that 2.25N memories are in fact necessary, and a heuristic that uses 2.5N memories is proposed. Simulation results under Bernoulli i.i.d. traffic suggest that minimizing the intersection between the memory sets of adjacent time slots improves performance. A significant gap between theory and practice remains, motivating further research.
Memory Management Algorithms for DSM Switches
Huan Liu, Damon Mosk-Aoyama
Distributed Shared Memory Switch
• What is known (for FIFO)
  • 2N is necessary
  • 3N-1 is sufficient
• Questions:
  • Can we close the gap?
  • Possible to have an implementable algorithm?
Our main results
• On the gap
  • Counter example to show 2.25N is necessary
  • A heuristic that uses 2.5N memories
• On practical algorithms
  • Simulation results on simple algorithms under Bernoulli i.i.d. traffic (traffic model sketched below)
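The simulations referenced above use Bernoulli i.i.d. traffic. As a point of reference, here is a minimal Python sketch of such a traffic source; the uniform choice of destination and the load parameter p are assumptions, since the slides do not specify the destination distribution.

```python
# Sketch of a Bernoulli i.i.d. traffic source for an NxN switch.
# Assumption: each input independently receives a cell with probability p
# per time slot, and destinations are uniform over the N outputs
# (the slides do not state the destination distribution).
import random

def bernoulli_iid_arrivals(N, p):
    """Return one time slot of arrivals as (input_port, output_port) pairs."""
    return [(i, random.randrange(N)) for i in range(N) if random.random() < p]

# Example: one slot of arrivals for a 4x4 switch at load 0.8
print(bernoulli_iid_arrivals(4, 0.8))
```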
The counter example
• Consider 4x4 switch
• Can generalize to arbitrary N
[Figure: arrival pattern of cells destined for Output 1 through Output 4]
If we have N/4 more
[Figure: cell placement when N/4 additional memories are available, Output 1 through Output 4]
Observation
• Greedily minimizing the number of memories used can lead to trouble
• Need to reuse memories later as time slot fills up
[Figure: example memory assignment across time slots]
Heuristics with 2.5N memories
• Minimize intersection between adjacent time slots
• Minimize intersection between neighboring pairs
• After N/2 cells have arrived in a time slot, reuse memories already assigned to the adjacent time slot
• Simulation has been running for 100M+ cycles with no problem
[Figure: minimizing intersection between adjacent time slots]
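The bullets above give only an outline of the heuristic. The following Python sketch shows one possible reading; the switch size N, memory count M = 2.5N, the choose_memory helper, and the random tie-breaking are illustrative assumptions, not the authors' implementation, and the "departing now" read constraint is omitted for brevity.

```python
# Sketch of the 2.5N-memory heuristic described above (one possible reading).
import random

N = 4                    # switch size, as in the 4x4 counter example
M = int(2.5 * N)         # memories available to the heuristic

# slot_memories[t] = set of memories already holding cells that depart in slot t
slot_memories = {}

def choose_memory(t):
    """Pick a memory for a cell that will depart in time slot t."""
    used_t = slot_memories.setdefault(t, set())
    adjacent = slot_memories.get(t - 1, set()) | slot_memories.get(t + 1, set())
    free = [m for m in range(M) if m not in used_t]

    if len(used_t) < N // 2:
        # Early in the slot: minimize intersection with adjacent slots,
        # i.e. prefer memories the neighbouring slots are not using.
        preferred = [m for m in free if m not in adjacent]
    else:
        # After N/2 cells: reuse memories already assigned to an adjacent
        # slot, keeping the untouched memories available for that slot.
        preferred = [m for m in free if m in adjacent]

    candidates = preferred or free
    if not candidates:
        return None          # no feasible memory; the cell would be dropped
    m = random.choice(candidates)
    used_t.add(m)
    return m
```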
Random algorithm
• Assign memories to arriving cells randomly
• Drop if another cell using the memory is
  • departing now
  • departing in the future in the same time slot
[Figure: departure time slots S1, S2, ..., Si]
Upper bound on Drop Rate
• Suppose there are … memories. The drop probability is …
• The drop rate can then be computed as …
• Use the Si distribution from an M/M/1 queue
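The formulas on this slide are not recoverable from the extracted text, so the sketch below only illustrates the general recipe: model the drop probability of a cell as a function of the number of busy memories, then average over the slot occupancy Si drawn from an M/M/1 distribution. The conflict model (N memories departing now plus Si memories already assigned to the cell's slot, drop probability (N + Si)/M) is an assumption, not the bound from the slide.

```python
# Monte Carlo sketch of a drop-rate estimate for the random algorithm.
# Assumed model (not the slide's formula): an arriving cell conflicts with
# the N memories being read now plus the S_i memories already assigned to
# its departure slot; S_i follows the M/M/1 occupancy P(S = s) = (1-rho)*rho^s.
import random

def mm1_occupancy(rho):
    """Sample S_i from a geometric (M/M/1 queue-length) distribution."""
    s = 0
    while random.random() < rho:
        s += 1
    return s

def estimate_drop_rate(N=16, M=48, rho=0.8, trials=200_000):
    drops = 0
    for _ in range(trials):
        s_i = mm1_occupancy(rho)
        unavailable = min(N + s_i, M)      # memories this cell must avoid
        if random.random() < unavailable / M:
            drops += 1                     # the random pick hit a busy memory
    return drops / trials

print(estimate_drop_rate())   # e.g. M = 3N memories at load 0.8
```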
Distributed random algorithm
• Each packet makes independent decision
• Pick a random memory that is NOT
  • departing now
  • departing in the same time slot in the future
• If two arriving packets pick the same memory, we drop one
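A minimal sketch of the distributed variant follows; the function name, the (slot, memory) bookkeeping, and the rule of keeping the first of two colliding picks are assumptions used for illustration.

```python
# Sketch of the distributed random algorithm: every arriving cell picks a
# memory independently, seeing only the state from before this cycle;
# cells that pick the same memory collide and all but one are dropped.
import random

def distributed_assign(arrivals, slot_memories, departing_now, M):
    """arrivals: departure time slot of each cell arriving this cycle.
    slot_memories: dict slot -> set of memories already holding cells for it.
    departing_now: set of memories being read in the current time slot.
    Returns (list of (slot, memory) assignments, number of dropped cells)."""
    # Phase 1: independent random picks.
    picks = []
    for slot in arrivals:
        busy = departing_now | slot_memories.get(slot, set())
        candidates = [m for m in range(M) if m not in busy]
        picks.append((slot, random.choice(candidates) if candidates else None))

    # Phase 2: resolve collisions -- keep the first cell, drop the rest.
    assignments, dropped, taken = [], 0, set()
    for slot, m in picks:
        if m is None or m in taken:
            dropped += 1
            continue
        taken.add(m)
        slot_memories.setdefault(slot, set()).add(m)
        assignments.append((slot, m))
    return assignments, dropped
```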
Centralized random algorithm
• Assign each packet in turn
• Randomly pick a memory that is NOT
  • departing now
  • departing in the same time slot in the future
  • assigned to other packets arriving at the same time
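For comparison, a sketch of the centralized variant under the same illustrative data structures: because cells are assigned one at a time, each pick can also exclude memories already given to earlier cells of the same cycle, so same-cycle collisions cannot occur.

```python
# Sketch of the centralized random algorithm: cells are served sequentially,
# and each pick avoids memories departing now, memories used by the cell's
# departure slot, and memories assigned earlier in the same cycle.
import random

def centralized_assign(arrivals, slot_memories, departing_now, M):
    """Sequentially assign arriving cells (listed by departure slot)."""
    assigned_this_cycle = set()
    assignments, dropped = [], 0
    for slot in arrivals:
        busy = (departing_now
                | slot_memories.get(slot, set())
                | assigned_this_cycle)
        candidates = [m for m in range(M) if m not in busy]
        if not candidates:
            dropped += 1                 # no conflict-free memory left
            continue
        m = random.choice(candidates)
        assigned_this_cycle.add(m)
        slot_memories.setdefault(slot, set()).add(m)
        assignments.append((slot, m))
    return assignments, dropped
```

The only design difference from the distributed sketch is the assigned_this_cycle set: trading centralized coordination for the elimination of same-cycle memory collisions.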
Conclusion
• Still a gap; more work needed
  • Better counter example?
  • Prove 2.5N is sufficient
• Also a gap between theory and practical algorithms