
Publisher Relocation Algorithms for Minimizing Delivery Delay and Message Load







  1. MIDDLEWARE SYSTEMS RESEARCH GROUP Publisher Relocation Algorithms for Minimizing Delivery Delay and Message Load Alex Cheung and Hans-Arno Jacobsen August 14th, 2009

  2. Agenda • Problem statement and goal • Related approaches • How GRAPE and POP work • Experiment results • Conclusions and future work

  3. Problem • Publishers can join anywhere in the network, typically at the closest broker • Impact: • High delivery delay • High system utilization at brokers that act as pure forwarders: matching load, bandwidth, and subscription storage [Diagram: publisher P routing publications across the overlay, through pure-forwarder brokers, to distant subscribers S.]

  4. Goal • Adaptively move the publisher to the area with the highest-rated subscribers or highest number of publication deliveries • Key properties of the solution: • Dynamic • Transparent • Scalable • Robust [Diagram: publisher P relocated closer to its matching subscribers S.]

  5. Existing Approaches • Filter-based pub/sub: • R. Baldoni et al. Efficient publish/subscribe through a self-organizing broker overlay and its application to SIENA. The Computer Journal, 2007. • Migliavacca et al. Adapting Publish-Subscribe Routing to Traffic Demands. DEBS 2007. • Multicast-based pub/sub: • Such as Riabov's subscription clustering algorithms (ICDCS '02 and '03), SUB-2-SUB (one subscription per peer), and TERA (topic-based) • Assign similar subscriptions to one or more clusters of servers • One-time match at the dispatcher • Suitable for static workloads • May produce false-positive publication deliveries • Architecture is fundamentally different from filter-based approaches

  6. Terminology [Diagram: a chain of brokers B1–B2–B3–B4–B5 with publisher P; arrows mark the publication flow, and the upstream and downstream directions are labeled relative to a reference broker.]

  7. GRAPE – Intro • Greedy Relocation Algorithm for Publishers of Events • Goal: Move publishers to the area with the highest-rated subscribers or highest number of publication deliveries, based on GRAPE's configuration.

  8. GRAPE’s Configuration • The configuration tells GRAPE what aspect of system performance to improve: • Prioritize on minimizing average end-to-end delivery delay or total system message rate (a.k.a. system load) • The prioritization weight falls on a scale from 0% (weakest) to 100% (full) • Example: Prioritize on minimizing load at 100% (load100)

  9. Minimize Delivery Delay or Load? [Diagram: publisher P and a trade-off scale from 100% Load to 100% Delay. A selective subscription [class,=,`STOCK’], [symbol,=,`GOOG’], [volume,>,1000000] yields 1 msg/s, while a broad subscription [class,=,`STOCK’], [symbol,=,`GOOG’], [volume,>,0] yields 4 msg/s; the sample publication is [class,`STOCK’], [symbol,`GOOG’], [volume,9900000].]

  10. GRAPE’s 3 Phases • Operation of GRAPE is divided into 3 phases: • Phase 1: • Discover the location of publication deliveries by tracing live publication messages in trace sessions • Retrieve trace and broker performance information • Phase 2: In a centralized manner, pinpoint the broker that minimizes the average delivery delay or system load • Phase 3: Migrate the publisher to the broker decided in Phase 2 • Transparently, with minimal routing table updates and message overhead

  11. Phase 1 – Logging Publication History • Each broker records, per publisher, the publications delivered to local subscribers • Gthreshold publications are traced per trace session • Each trace session is identified by the message ID of the first traced publication message of that session (e.g. B34-M212), which requires each publication to carry the trace session ID [Diagram: GRAPE's data structure representing the local delivery pattern — per trace session ID, a bit vector (e.g. 0 1 0 1 1 1 0 0 1 0 …) with one bit per publication received since the start of the trace session.]
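The per-session delivery log above can be sketched as follows. This is an illustrative data structure, not the actual PADRES implementation; the class and method names, and the assumption that each publication carries a (session ID, sequence number) pair, are ours.

```python
# Hypothetical sketch of GRAPE's per-session delivery log. Assumes each
# traced publication carries (session_id, seq), where seq is its position
# within the trace session; names are illustrative.
from collections import defaultdict

G_THRESHOLD = 50  # publications traced per session (Gthreshold)

class DeliveryLog:
    def __init__(self):
        # session_id -> bit vector, one bit per traced publication
        self.sessions = defaultdict(lambda: [0] * G_THRESHOLD)

    def record(self, session_id, seq, delivered_locally):
        # Set the bit only when the publication matched a local subscriber
        if delivered_locally and seq < G_THRESHOLD:
            self.sessions[session_id][seq] = 1

    def bit_vector(self, session_id):
        return self.sessions[session_id]

log = DeliveryLog()
log.record("B34-M212", 1, True)
log.record("B34-M212", 2, False)
log.record("B34-M212", 3, True)
assert log.bit_vector("B34-M212")[:5] == [0, 1, 0, 1, 0]
```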

  12. Phase 1 – Trace Data and Broker Performance Retrieval [Diagram: at the end of a trace session, reply messages propagate upstream from the edge brokers toward the publisher's broker, each reply accumulating the trace data of the brokers it passes (e.g. Reply B8 → Reply B8, B7, B6 → Reply B8, B7, B6, B5); local delivery counts (1x, 5x, 9x) are collected along the way.]

  13. Phase 1 – Contents of Trace Information • Broker ID • Neighbor ID(s) • Bit vector (for estimating total system message rate) • Total number of local deliveries (for estimating end-to-end delivery delay) • Input queuing delay • Average matching delay • Output queuing delays to neighbor(s) and binding(s) • GRAPE adds 1 reply message per trace session.
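The fields listed above can be grouped into one record per replying broker. The following sketch is purely illustrative; the field names and types are assumptions, not the actual PADRES message format.

```python
# Illustrative record of the per-broker trace information; field names
# are assumptions, not the actual wire format.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class TraceInfo:
    broker_id: str
    neighbor_ids: List[str]
    bit_vector: List[int]        # for estimating total system message rate
    local_deliveries: int        # for estimating end-to-end delivery delay
    input_queue_delay_ms: float
    avg_matching_delay_ms: float
    # output queuing delays, keyed by neighbor or binding (e.g. "RMI")
    output_queue_delays_ms: Dict[str, float] = field(default_factory=dict)

info = TraceInfo("B7", ["B6"], [0, 0, 1, 0, 0], 9,
                 30.0, 10.0, {"RMI": 70.0, "B6": 30.0})
assert info.local_deliveries == 9
```

One such record per broker, bundled into the single per-session reply, is all Phase 2 needs as input.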

  14. Phase 2 – Broker Selection • Estimate the average end-to-end delivery delay • Local delivery counts, and queuing and matching delays • Publisher ping times to the downstream brokers • Estimate the total broker message rate • Bit vectors

  15. Phase 2 – Estimating Average End-to-End Delivery Delay [Worked example: publisher P has a 10 ms ping time to the candidate broker. Each broker reports its input queuing delay, matching delay, and output queuing delays (RMI and per-neighbor). Summing the per-hop delays along each delivery path and weighting by the local subscriber counts gives: subscriber at B1: 10 + (30+20+100) × 1 = 160 ms; subscribers at B2: 10 + [(30+20+50) + (20+5+45)] × 2 = 350 ms; subscribers at B7: 10 + [(30+20+50) + (20+5+40) + (30+10+70)] × 9 = 2,485 ms; subscribers at B8: 10 + [(30+20+50) + (20+5+35) + (35+15+75)] × 5 = 1,435 ms; average end-to-end delivery delay: (150+340+2475+1425) ÷ 17 = 268 ms.]
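The estimate above can be sketched as a small function: for each broker hosting subscribers, sum the per-hop (input queuing + matching + output queuing) delays along the path from the candidate broker, weight by the local subscriber count, add the publisher's ping time, and average over all deliveries. A minimal sketch under those assumptions; function and variable names are ours.

```python
# Sketch of GRAPE's delay estimate. paths is a list of
# (hop_delays, n_subscribers) pairs, where hop_delays is a list of
# (input_q_ms, matching_ms, output_q_ms) triples along the path from
# the candidate broker to the subscribers' broker.
def avg_delivery_delay(ping_ms, paths):
    total, deliveries = 0.0, 0
    for hops, n_subs in paths:
        path_delay = sum(sum(h) for h in hops)   # per-hop delays summed
        total += (ping_ms + path_delay) * n_subs # weight by subscriber count
        deliveries += n_subs
    return total / deliveries

# Single subscriber one hop away, as in the B1 term of the example:
assert avg_delivery_delay(10, [([(30, 20, 100)], 1)]) == 160
```

Phase 2 evaluates this estimate once per candidate broker, using the ping times reported by the publisher and the queuing/matching delays from the trace replies.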

  16. Phase 2 – Estimating Total Broker Message Rate • Each broker's bit vector captures publication deliveries to its local subscribers • The message rate through a broker is calculated by using the OR bit operator to aggregate the bit vectors of all downstream brokers [Diagram: per-broker bit vectors, e.g. 0 0 1 0 0 at B7 (9 subscribers) and 1 0 0 0 0 at B8 (5 subscribers), are OR-ed together hop by hop toward the publisher, yielding 1 1 1 1 1 at the publisher's broker.]
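The OR-aggregation above can be sketched directly: a broker's effective bit vector is the bitwise OR of its own local-delivery vector with those of all its downstream brokers, and the number of set bits tells how many of the traced publications must flow through it. Names and values below are illustrative.

```python
# Sketch of the bit-vector aggregation: OR a broker's local-delivery
# vector with the vectors of all its downstream brokers.
def aggregate(local, downstream):
    vec = list(local)
    for d in downstream:
        vec = [a | b for a, b in zip(vec, d)]
    return vec

b7 = [0, 0, 1, 0, 0]        # B7 delivered traced publication #3 locally
b8 = [1, 0, 0, 0, 0]        # B8 delivered traced publication #1 locally
b6_local = [0, 0, 0, 0, 1]  # B6's own local deliveries
b6 = aggregate(b6_local, [b7, b8])
assert b6 == [1, 0, 1, 0, 1]  # B6 must forward 3 of the 5 traced publications
```

Scaling the set-bit fraction by the publisher's publication rate gives the estimated message rate through each broker for each candidate placement.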

  17. Phase 2 – Minimizing Delivery Delay with Weight P% • Get ping times from the publisher • Calculate the average delivery delay if the publisher is positioned at any of the downstream brokers • Normalize, sort, and drop candidates whose normalized average delivery delay is greater than 1 − P (0 ≤ P ≤ 1) • Calculate the total broker message rate if the publisher is positioned at any of the remaining candidate brokers • Select the candidate that yields the lowest total system message rate.
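The selection steps above can be sketched as follows, assuming the per-candidate delay and message-rate estimates from the previous slides are already computed. This is a minimal sketch; the normalization scheme (min-max) and all names are our assumptions.

```python
# Sketch of candidate selection with delay weight P in [0, 1]:
# min-max-normalize each candidate's estimated delay, drop candidates
# whose normalized delay exceeds 1 - P, then pick the survivor with the
# lowest estimated total message rate.
def select_broker(candidates, P):
    # candidates: {broker_id: (avg_delay_ms, total_msg_rate)}
    delays = {b: d for b, (d, _) in candidates.items()}
    lo, hi = min(delays.values()), max(delays.values())
    span = (hi - lo) or 1.0
    survivors = [b for b, d in delays.items() if (d - lo) / span <= 1 - P]
    return min(survivors, key=lambda b: candidates[b][1])

cands = {"B1": (260, 40), "B6": (180, 55), "B7": (150, 60)}
assert select_broker(cands, 0.0) == "B1"  # pure load: lowest message rate wins
assert select_broker(cands, 1.0) == "B7"  # pure delay: lowest delay wins
```

At intermediate weights, the delay filter shrinks the candidate pool and the message rate breaks the tie, which is how one knob trades delay against load.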

  18. Phase 3 – Publisher Migration Protocol Requirements: • Transparent to the end-user publisher • Minimize network and computational overhead • No additional storage overhead

  19. Phase 3 – Example [Diagram: migration walk-through on a broker tree (B1–B8, with local subscriber counts such as 9 at B7 and 5 at B8). The publisher moves from B5 to B6: the affected brokers update the last hop of P to point toward the new broker, remove subscriptions whose last hop has become stale, and forward all matching subscriptions toward the publisher's new broker, after which the migration is DONE. Open question posed on the slide: how to tell when all subscriptions are processed by B6 before P can publish again?]

  20. POP – Intro • Publisher Optimistic Placement • Goal: Move publishers to the area with the highest number of publication deliveries or concentration of matching subscribers

  21. POP’s Methodology Overview • 3-phase algorithm: • Phase 1: Discover the location of publication deliveries by probabilistically tracing live publication messages • Ongoing, and efficient, with minimal network, computational, and storage overhead • Phase 2: In a decentralized fashion, pinpoint the broker closest to the set of matching subscribers using the trace data from Phase 1 • Phase 3: Migrate the publisher to the broker decided in Phase 2 • Same as GRAPE's Phase 3

  22. Phase 1 – Aggregated Replies • Multiple publication traces are aggregated by an exponential moving average: Si = α·Snew + (1 − α)·Si−1 • Each broker stores the aggregate in its publisher profile table • In terms of message overhead, POP introduces 1 reply message per traced publication [Diagram: reply messages carrying aggregated downstream delivery counts (e.g. Reply 9, Reply 5, Reply 15) propagate from the edge brokers toward the publisher's broker B1.]
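The aggregation formula Si = α·Snew + (1 − α)·Si−1 is a standard exponential moving average over per-trace delivery counts. A minimal sketch, with the smoothing factor α = 0.5 and the profile-table shape as our assumptions:

```python
# Sketch of POP's trace aggregation: an exponential moving average
# S_i = alpha * S_new + (1 - alpha) * S_{i-1}; alpha is illustrative.
def aggregate_trace(prev, new, alpha=0.5):
    return alpha * new + (1 - alpha) * prev

s = 0.0
for count in [9, 15, 15, 5]:  # delivery counts from successive traces
    s = aggregate_trace(s, count)
assert s == 8.6875
```

The decay weight α controls how quickly old traces fade, letting the profile table track shifting subscriber interest without storing every trace.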

  23. Phase 2 – Decentralized Broker Selection Algorithm • Phase 2 starts when Pthreshold publications have been traced • Goal: Pinpoint the broker closest to the highest concentration of matching subscribers • Uses trace information from only a subset of brokers • The Next Best Broker condition: • The next best neighboring broker is the one whose number of downstream subscribers is greater than the sum of all other neighbors' downstream subscribers plus the local broker's subscribers.
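The Next Best Broker condition above can be sketched as a local check each broker runs on its aggregated counts; the function and parameter names are illustrative, not from the actual implementation.

```python
# Sketch of the Next Best Broker condition: migrate toward neighbor n
# only if n's downstream subscriber count exceeds the sum of all other
# neighbors' downstream counts plus the local broker's own subscribers.
def next_best_broker(local_subs, neighbor_subs):
    # neighbor_subs: {neighbor_id: downstream subscriber count}
    total = sum(neighbor_subs.values()) + local_subs
    for n, subs in neighbor_subs.items():
        if subs > total - subs:
            return n      # keep moving toward this neighbor
    return None           # no neighbor dominates: stay at the local broker

assert next_best_broker(1, {"B5": 14, "B4": 3}) == "B5"  # 14 > 3 + 1
assert next_best_broker(3, {"B5": 6, "B4": 5}) is None   # 6 <= 5 + 3
```

Because at most one neighbor can hold a strict majority of subscribers, the walk never oscillates: it terminates at the first broker where no neighbor dominates.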

  24. Phase 2 – Example [Diagram: decentralized broker selection on the tree B1–B8 with local subscriber counts (e.g. 9 at B7, 5 at B8). A relocation request (AdvId: P, DestId: null) accumulates the broker list B1, B5, B6 as the Next Best Broker condition is evaluated hop by hop.]

  25. Experiment Setup • Experiments on both PlanetLab and a cluster testbed • PlanetLab: 63 brokers, 1 broker per box; 20 publishers with publication rates of 10–40 msg/min; 80 subscribers per publisher (1,600 subscribers in total); Pthreshold of 50; Gthreshold of 50 • Cluster testbed: 127 brokers, up to 7 brokers per box; 30 publishers with publication rates of 30–300 msg/min; 200 subscribers per publisher (6,000 subscribers in total); Pthreshold of 100; Gthreshold of 100

  26. Average Input Utilization Ratio vs. Subscriber Distribution [Graph]

  27. Average Delivery Delay vs. Subscriber Distribution [Graph]

  28. Results Summary • Under a random workload: • No significant performance differences between POP and GRAPE • The prioritization metric and weight have almost no impact on GRAPE's performance • Increasing the number of publication samples in POP: • Increases the response time • Increases the amount of message overhead • Increases the average broker message rate • GRAPE reduces the input utilization ratio by up to 68%, the average message rate by 84%, the average delivery delay by 68%, and message overhead relative to POP by 91%.

  29. Conclusions and Future Work • POP and GRAPE move publishers to the area with the highest-rated or highest number of matching subscribers to: • Reduce load in the system (scalability), and/or • Reduce the average delivery delay of publication messages (performance) • Future work: a subscriber relocation algorithm that works in concert with GRAPE

  30. Questions and Notes
