
Tuning and Scalability for Sonic


Presentation Transcript


  1. Tuning and Scalability for Sonic: Analyzing, testing and tuning JMS/ESB performance. Jiri De Jagere, Solution Engineer EMEA

  2. Disclaimer: Setting Performance Expectations
  • System performance is highly dependent on machine, application and product version. Performance levels described here may or may not be achievable in a specific deployment.
  • Tuning techniques often interact with specific features, operating environment characteristics and load factors. You should test every option carefully and make your own decision regarding the relative advantages of each approach.

  3. Agenda: Analyzing, testing and tuning JMS/ESB performance
  • Methodology
  • Analysis
  • Testing
  • Tuning

  4. Performance Engineering Terms
  Expert Tip: Limit scope to those test components that are critical to performance and under your control.
  [Diagram: the "Test Harness" surrounds the "System Under Test"; "Test Components" are distinguished from "External Components"]
  • "Platform", "System Metric"
  • "Load" = "Sessions" × "Delivery Rate"
  • "Variable" = client parameter, application parameter or system parameter
  • "Latency" = ReceiveTime – SendTime

  5. Concepts: Partitioning Resource Usage
  Bottom-Up Rule: Test and tune each component before you test and tune the aggregate.
  %CPU (load) = Σ over services (overhead per message × message rate), e.g. (writes/msg) × (msg/sec) = writes/sec
  • Partitionable resources can be broken down as the sum of the contributions of each test component on the system
  • Total resource usage is limited by system capacity, and becomes the bottleneck as it nears 100%
  • The goal is linear scalability as additional resource is added (vertical versus horizontal scalability)
  • Total latency is the sum across all resource latencies, i.e.: Latency = CPU_time + Disk_time + Socket_time + wait_sleep
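  A minimal sketch of this load arithmetic in Java, using made-up per-service overhead and rate figures (the service names and the 1,000 writes/sec disk capacity are illustrative assumptions, not measured values):

    // Illustrative only: hypothetical services with made-up overhead figures.
    public class LoadEstimate {
        public static void main(String[] args) {
            // Per-service overhead (disk writes per message) and message rate (msg/sec)
            String[] names    = { "ingest", "transform", "route" };
            double[] overhead = {      2.0,         0.5,     1.0 };  // writes/msg
            double[] rate     = {    100.0,       100.0,    50.0 };  // msg/sec

            double totalWritesPerSec = 0;
            for (int i = 0; i < names.length; i++) {
                totalWritesPerSec += overhead[i] * rate[i]; // each service's contribution
            }
            // Against an assumed disk capacity of 1,000 writes/sec:
            double capacity = 1000.0;
            System.out.printf("Load: %.0f writes/sec = %.0f%% of disk capacity%n",
                    totalWritesPerSec, 100 * totalWritesPerSec / capacity);
        }
    }

  Here the three services sum to 300 writes/sec, 30% of the assumed capacity; as that sum approaches 100%, the disk becomes the bottleneck.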

  6. Concepts: Computer Platform Resources
  Resources: CPU time; memory (in use, swap); # threads; network I/O (send/receive); disk I/O (read/write)
  Favorite tools: Task Manager, perfmon, top, ping -s, traceroute
  • Use a level of detail appropriate to the question being asked
  • Watch for machine resource (such as %CPU) artifacts: side effects from uncontrolled applications, timing of the refresh interval, correlation with test intervals
  • Consider scaling across different platforms and resource types

  7. The Performance Engineering Project
  Expert Tip: Schedule daily meetings to share results and reprioritize the test plan.
  The project is goal driven, cycling through Test, Analyze, Tune.
  • Startup tasks: define project completion goals, staff benchmarking skills, acquire a test harness
  • For each iteration: test performance versus the goal, then identify the likeliest area for gain

  8. Performance Project Skills
  [Diagram: the Requirements Expert (R.E.), Integration Expert (I.E.) and Testing Expert (T.E.) contribute cost/benefit, load/distribution and design options to the solution]
  • Requirements Expert: SLA/QoS levels (minimal and optimal), predicted load (current and future), distribution topology
  • Integration Expert: allowable design options, cost to deploy, cost to maintain
  • Testing Expert: load generation tool, bottleneck diagnosis, tuning and optimization

  9. Component Platform Tools for a Messaging Benchmark
  • System Under Test – the platforms and components whose performance response is being measured
  • Test Configurator – creates the conditions that bring the system under test into a known state
  • Test Harness – generates the load and collects the raw measurements
  • Test Analyzer – tools and procedures to make meaningful conclusions based on result data

  10. Performance Project Timeline
  [Timeline diagram, weeks 1–8: the Development Project track (design, service dev, process dev, system test, deployment plan) runs in parallel with the Performance Project track (sizing, performance prototype, application tuning), converging on launch week]

  11. Agenda: Analyzing, testing and tuning JMS/ESB performance
  • Methodology
  • Analysis
  • Testing
  • Tuning

  12. Performance Analysis
  • Performance scenarios: requirements, goals
  • System characterization: platforms, architecture
  • Test cases

  13. Performance Scenario Specification
  Rule of Thumb: Focus on broker loads over 10 msg/sec or 1 MByte/sec, and service loads over 10,000 per hour.
  • First, triage performance-sensitive processes: substantial messaging load and latency requirements, impact of resource-intensive services
  • Document only process messaging and services; leave out application-specific logic, since this is a prototype
  • Set specific messaging performance goals: message rate and size, minimum and average latency required
  • Try to quantify actual value and risk: why this use case matters

  14. Example: Performance Scenario Specification
  • Overall project scope: project goals and process, deployment environment, system architecture
  • For each scenario: description of the business process, operational constraints (QoS, recovery, availability), performance goals including business benefits

  15. System Characterization
  • Scope current and future hardware options available to the project
  • Identify geographical locations, firewalls and predefined service hosting restrictions
  • Define predefined endpoints and services
  • Define data models and identify required conversions and transformations

  16. Platform Configuration Specification
  Rule of Thumb: LAN: 15–150 MBytes/second; disk: 2–10 MBytes/second; XSLT: 200–300 KBytes/second.
  Specify for each zone (Field, DMZ):
  • Network: bandwidth, latency, speed
  • CPU: number, type, speed
  • Memory: size, speed
  • Firewall: cryptos, latency
  • Disk: type, speed

  17. Architecture Spec: Service Distribution
  [Diagram: ESB instances spanning Field, Front Office, Back Office and Partner zones, connecting PoS, CRM, SFA, SCM, Finance and ERP systems through adapters, an integration broker and a tracking service]
  • Identify service performance characteristics
  • Identify high-bandwidth message channels
  • Decide which components can be modified
  • Annotate with message load estimates

  18. Architecture Spec: Data Integration
  Expert Tip: Transform tools vary in efficiency. XSLT is slowest (but most standard); a semantic modeler is generally faster (e.g. Sonic DXSI); a custom text service is fastest, but not as flexible.
  [Diagram: transformation mapping data schemas 1…n to data schemas 2…m]
  • Approximate the complexity of data schemas
  • Identify performance-critical transformation events
  • Estimate message size
  • Identify acceptable potential services

  19. Platform Profile: Real-Time Messaging
  Rule of Thumb: For real-time, 1 KB messages, broker performance is about 1,000 to 10,000 msg/sec for each 1 GHz of CPU power.
  [Chart: system resource limitations, % capacity used per resource (values shown: 90%, 70%, 20%, 5%)]

  20. Platform Profile: Queued Requests
  Rule of Thumb: For queued, 1 KB messages, broker performance is about 100 to 1,000 msg/sec for each 1 GHz of CPU power.
  [Chart: system resource limitations, % capacity used per resource (values shown: 85%, 50%, 40%, 20%)]

  21. Specifying Test Cases
  • Factors to include: load, sizes and complexity of messaging; range of scalability to try (e.g. min/max msg size); basic ESB process model; basic distribution architecture
  • Details to avoid: application code (unless readily available); detailed transformation maps
  • Define relevant variables: fixed factors, test variables, dependent measures

  22. Typical Test Variables
  Expert Tip: External JMS client variables are easily managed with the Test Harness.
  • JMS client variables: connection/session usage, quality of service, interaction mode, message size and shape
  • ESB container variables: service provisioning and parameters, endpoint/connection parameters, process implementation and design

  23. Example: Test Case Specification
  For each identified test case there is a section specifying the following:
  • Overview of the test: how this use case relates to the scenario, and the key decision points being examined
  • Functional definition: how to simulate the performance impact, a description of the ESB processes and services, sample messages, and the design alternatives that will be compared
  • Test definition: variables manipulated, variables measured
  • Completion criteria: throughput and latency goals, plus issues and options that may merit investigation

  24. Agenda: Analyzing, testing and tuning JMS/ESB performance
  • Methodology
  • Analysis
  • Testing
  • Tuning

  25. Testing
  • Think about the approach you want to take: bottom-up vs. top-down
  • Create a controllable environment: simulate non-essential services, and use a testing tool to simulate clients
  • Evaluate the results

  26. Simulating Clients with the Test Harness
  [Diagram: the Test Harness drives request/reply traffic through the broker (the system under test), using a JNDI or file-based connection configuration, a message pool and message generation]
  • File or JNDI connection configuration
  • Producer/consumer parameters
  • Message generation
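  A minimal sketch of what such a simulated client does under the hood, using the standard javax.jms API; the JNDI lookup names, message size and count are illustrative placeholders, not Sonic Test Harness settings:

    import javax.jms.*;
    import javax.naming.InitialContext;

    // Minimal JMS load generator: sends fixed-size messages in a loop
    // and reports throughput. All lookup names are placeholders.
    public class SimpleLoadClient {
        public static void main(String[] args) throws Exception {
            InitialContext jndi = new InitialContext(); // reads jndi.properties from the classpath
            ConnectionFactory factory = (ConnectionFactory) jndi.lookup("ConnectionFactory");
            Queue queue = (Queue) jndi.lookup("perf.test.queue");

            Connection conn = factory.createConnection();
            try {
                Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
                MessageProducer producer = session.createProducer(queue);
                producer.setDeliveryMode(DeliveryMode.NON_PERSISTENT);

                String payload = "x".repeat(1024); // roughly 1 KB message body
                int count = 10_000;
                long start = System.nanoTime();
                for (int i = 0; i < count; i++) {
                    producer.send(session.createTextMessage(payload));
                }
                double secs = (System.nanoTime() - start) / 1e9;
                System.out.printf("Sent %d msgs in %.2f s (%.0f msg/sec)%n",
                        count, secs, count / secs);
            } finally {
                conn.close();
            }
        }
    }

  A real harness adds a consumer side so latency (receive time minus send time) can be measured per round trip, as defined on slide 4.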

  27. Evaluating the Results
  Expert Tip: Spreadsheets are excellent for documenting and interpreting the results.
  • Test Harness output: throughput (msg/sec), latency (msecs per round trip), message size (bytes)
  • System measures: CPU usage (%usr, %sys), disk usage (writes/sec)
  • Broker metrics: messaging rate (bytes/second), peak queue size (bytes)

  28. Evaluating the Results (continued)
  • Evaluate against the goals
  • Perform a bottleneck analysis
  • Compare across test runs

  29. Agenda: Analyzing, testing and tuning JMS/ESB performance
  • Methodology
  • Analysis
  • Testing
  • Tuning

  30. The Art of Tuning ("Pimp my ride!")
  • Change only one parameter at a time
  • Take your time
  • Always check the same things (even reboot/restart after each run)
  • Know what you are measuring

  31. Java Tuning Options
  Rule of Thumb: On Windows platforms, the Sun 1.5.0 JVM is 10% to 50% slower than the default IBM 1.4.2 JVM.
  • The 'fastest' JVM depends a little on the application and a lot on the platform
  • VM heap size
  • Garbage collection: the default settings are good for optimal throughput; use the advanced GC options (JDK 1.4 or later) to minimize latency
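  As an illustration only, a latency-oriented HotSpot command line for a broker or container JVM might look like the following; the heap size and main class are placeholders, and the concurrent collector flag is a standard HotSpot option of that era, not a Sonic-documented recommendation:

    # Illustrative only: heap sizes and the main class are placeholders.
    # A fixed-size heap (-Xms equal to -Xmx) avoids pauses from heap resizing;
    # the concurrent collector trades some throughput for shorter GC pauses.
    java -Xms1024m -Xmx1024m -XX:+UseConcMarkSweepGC com.example.BrokerContainer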

  32. Broker Tuning Options: CPU
  Sources of broker CPU cost:
  • CPU cost of I/O: network sockets, log/data disks
  • Web service protocol and web service security
  • Security: authorization, message or channel encryption

  33. Broker Tuning Options: Disk
  Sources of broker disk overhead:
  • Message recovery log
  • Persistent message store (might not be used if other guarantee mechanisms work)
  • Message data store: disconnected consumers, queue save threshold overflow, flow-to-disk overflow
  • Message storage overhead depends on disk sync behavior: explicit file synchronization ensures data is retained on a crash
  • Tuning options exist at the disk, filesystem, JVM and broker levels
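  To illustrate what "explicit file synchronization" costs, here is a generic Java sketch of a synced append (not Sonic's actual store code): forcing each write to the physical disk is what lets a message survive a crash, and also what makes a recovery log expensive per message:

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    // Generic sketch of a synced append, the pattern behind a recovery log.
    public final class SyncedLog {
        public static void append(FileChannel log, byte[] record) throws IOException {
            log.write(ByteBuffer.wrap(record));
            log.force(false); // block until the OS flushes the data to the physical disk
        }

        public static void main(String[] args) throws IOException {
            try (FileChannel log = FileChannel.open(Paths.get("recovery.log"),
                    StandardOpenOption.CREATE, StandardOpenOption.APPEND)) {
                append(log, "message-0001\n".getBytes());
            }
        }
    }

  Each force() is a disk round trip, which is why the rules of thumb above put disk throughput orders of magnitude below CPU-bound message rates.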

  34. Broker Tuning Parameters
  Rule of Thumb: For non-trivial queues, multiply the default settings by 10 to 100.
  • Core resources: JVM heap size; JVM thread and stack limits; DRA, HTTP and transaction threads; TCP settings
  • Message storage: queue size and save threshold, pub/sub buffers, message log and store
  • Message management: encryption, flow control and flow-to-disk, dead message queue management
  • Connections: mapping to NICs, timeouts, load balancing

  35. Messaging Tuning Options: QoS
  Expert Tip: With CAA configured, Best Effort service is equivalent to At Least Once, with substantially lower overhead.
  Implement the optimal QoS for speed versus precision (based on CAA brokers and fault-tolerant connections).
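  In plain JMS terms, the speed-versus-precision trade-off surfaces as the producer's delivery mode. A hedged sketch, assuming the usual mapping of Best Effort to non-persistent delivery and At Least Once to persistent delivery:

    import javax.jms.*;

    // Sketch: mapping QoS choices onto standard JMS delivery modes.
    public final class QosSettings {
        // Best Effort: fastest, but messages can be lost if the broker fails.
        public static void bestEffort(MessageProducer p) throws JMSException {
            p.setDeliveryMode(DeliveryMode.NON_PERSISTENT);
        }
        // At Least Once: the broker logs each message before acknowledging the send.
        public static void atLeastOnce(MessageProducer p) throws JMSException {
            p.setDeliveryMode(DeliveryMode.PERSISTENT);
        }
        // Exactly Once layers transacted sessions (and duplicate detection) on top
        // of persistent delivery; see the batching example on the next slide.
    }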

  36. Messaging Tuning Options: Batching
  [Diagram: a producer streams messages through the broker to a consumer, with acks flowing back]
  Use message batching to accelerate message streams:
  • Message transfer overhead is generally fixed
  • Hidden ack messages are amenable to tuning: AsyncNonPersistent mode decouples ack latency; Transaction Commit allows 1 ack per N messages; the DupsOK ack mode allows a 'lazy' ack from the consumer; the Pre-Fetch Count allows batched transmission to the consumer
  • ESB design option: send one multi-part message instead of N individual messages
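  A minimal sketch of the "1 ack per N messages" idea using a transacted JMS session; the batch size and method signature are placeholders:

    import javax.jms.*;

    // Sketch: committing once per batch so the broker round trip is paid
    // once per N messages instead of once per message.
    public final class BatchedSender {
        static final int BATCH_SIZE = 100; // placeholder; tune from test results

        public static void sendAll(Connection conn, Queue queue, String[] payloads)
                throws JMSException {
            Session session = conn.createSession(true, Session.SESSION_TRANSACTED);
            MessageProducer producer = session.createProducer(queue);
            int inBatch = 0;
            for (String payload : payloads) {
                producer.send(session.createTextMessage(payload));
                if (++inBatch == BATCH_SIZE) {
                    session.commit(); // one broker acknowledgement for the whole batch
                    inBatch = 0;
                }
            }
            if (inBatch > 0) {
                session.commit(); // flush the final partial batch
            }
            session.close();
        }
    }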

  37. Flow Control: Point-to-Point
  [Diagram: a sender feeding a queue with multiple potential receivers]
  • If the maximum size of a queue is reached, flow control kicks in
  • The sender appears to hang, but is only waiting for space in the queue
  • Flow control can be switched off and replaced by an exception in your code
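  If flow control is disabled (a broker/connection configuration setting in Sonic; the exact property is not shown here), a send against a full queue fails instead of blocking. A sketch of handling that in plain JMS terms:

    import javax.jms.*;

    // Sketch: with flow control disabled, a full queue surfaces as an
    // exception on send, which the application can handle explicitly.
    public final class NonBlockingSender {
        public static void trySend(MessageProducer producer, Session session,
                                   String payload) {
            try {
                producer.send(session.createTextMessage(payload));
            } catch (JMSException fullQueue) {
                // Queue is at its maximum size; decide what to do instead of blocking:
                // e.g. sleep and retry, spool locally, or raise an alert.
                System.err.println("Send rejected, backing off: " + fullQueue.getMessage());
            }
        }
    }

  The design choice is between a sender that silently waits (flow control on) and one that must contain its own back-off logic (flow control off); the second makes the condition visible but puts the burden on application code.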

  38. Flow Control: Publish and Subscribe
  [Diagram: a publisher feeding a topic with multiple subscribers]
  • In pub/sub, flow control behavior is more difficult to predict
  • Each subscriber gets a buffer (Outgoing Buffer Size)
  • Sonic tries to keep only one copy of the message in memory
  • When a buffer reaches a threshold, flow control is issued; the threshold depends on message priority

  39. Flow Control: Publish and Subscribe (continued)
  Danger: a slow subscriber may fill its outgoing buffer and cause flow control to be applied to the publisher.
  [Diagram: one subscriber keeps acking while the slow subscriber's outgoing buffer fills with messages, triggering flow control]
  Note: only a pointer is inserted in each buffer, but the calculated size is the full message size.

  40. Client Tuning Options
  Rule of Thumb: Up to 500 queues per broker and 10,000 topics per broker.
  • Reuse JMS objects to reduce setup cost; objects with both client and broker footprint: Connection, Session, Sender/Receiver/Publisher/Subscriber
  • Coding best practices: share connections across sessions, and share sessions across producers/consumers, but not across threads!
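  A sketch of that reuse pattern in standard javax.jms: one shared connection for the process, and one session per thread, since sessions must not be shared across threads. The class and method names are illustrative:

    import javax.jms.*;

    // Sketch: one Connection shared process-wide; each thread builds its own
    // Session from it, because Sessions are not thread-safe.
    public final class JmsObjectReuse {
        private final Connection sharedConnection;           // safe to share
        private final ThreadLocal<Session> sessionPerThread; // one Session per thread

        public JmsObjectReuse(ConnectionFactory factory) throws JMSException {
            sharedConnection = factory.createConnection();
            sharedConnection.start();
            sessionPerThread = ThreadLocal.withInitial(() -> {
                try {
                    return sharedConnection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                } catch (JMSException e) {
                    throw new IllegalStateException(e);
                }
            });
        }

        public void send(Queue queue, String payload) throws JMSException {
            Session session = sessionPerThread.get();
            // In a real client the producer would also be cached per thread,
            // not recreated on every send.
            MessageProducer producer = session.createProducer(queue);
            try {
                producer.send(session.createTextMessage(payload));
            } finally {
                producer.close();
            }
        }
    }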

  41. ESB Tuning Options: Scaling
  • X-scaling: multiple listeners
  • Y-scaling: multiple JVMs
  • Z-scaling: multiple machines

  42. ESB Tuning Options: Container Messaging
  [Diagram: inter-container messaging routes each step through the broker (receive msg, unmarshall msg, instantiate process, call onMessage, dispatch outbox, marshall msg, send msg), while intra-container messaging skips the broker hop between steps]
  Intra-container messaging in v7.5: better! faster!

  43. ESB Tuning Options: Transformations
  • XML Server
  • BPEL Server
  • Database Service
  • DXSI Service
  • …

  44. In Summary
  • Know what you're doing!
  • If you don't, get someone in who does!
  • Use good old plain common sense

  45. For More Information
  • PSDN: white paper "Benchmarking Enterprise Messaging Systems"; Sonic Test Harness
  • Documentation: Progress SonicMQ Performance Tuning Guide

  46. Questions?

  47. Thank you for your time
