Word Wide Cache

Presentation Transcript


  1. Word Wide Cache – Distributed Caching for the Distributed Enterprise

  2. Agenda • Introduction to distributed caching • Scenario for using caching • Caching for the Virtual Organization

  3. The market need: “The major initiatives are building up our low-latency infrastructure, moving toward a service-oriented architecture (SOA) and leveraging grid computing.” (Sharon Reed, CTO for Global Markets Trading Technology, Merrill Lynch)

  4. What is Distributed Caching? • An in-memory data store that can be shared between distributed applications in real time.
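As a minimal sketch of the idea, using the standard JCache (JSR-107) API that appears later in the deck; the cache name "marketView", the key/value types and the provider wiring are illustrative assumptions, not part of the original slides.

```java
import javax.cache.Cache;
import javax.cache.CacheManager;
import javax.cache.Caching;
import javax.cache.configuration.MutableConfiguration;

// Minimal sketch: a shared in-memory store accessed through the standard JCache API.
// Assumes a JCache-compliant provider is on the classpath; names are illustrative.
public class SharedCacheSketch {
    public static void main(String[] args) {
        CacheManager manager = Caching.getCachingProvider().getCacheManager();
        Cache<String, String> cache = manager.createCache(
                "marketView", new MutableConfiguration<String, String>());

        // One application instance writes an entry...
        cache.put("MSFT", "27.35");

        // ...and any other instance sharing the same cache reads it with in-memory latency,
        // without a round trip to a central database.
        System.out.println("Last quote for MSFT: " + cache.get("MSFT"));
    }
}
```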

  5. What’s New • Background: • Memory capacity has increased dramatically in recent years. • The NetEffect opens the opportunity to create a virtual memory grid. • Many applications are seeking ways to use the available memory resources for a performance boost. • The Need: • Managing a memory resource in a reliable, transactional manner is extremely complex. • Applications are seeking a generic infrastructure for using memory resources to reduce data-access overhead in a distributed environment.

  6. Why use Distributed Caching? • Scalability • Reduce the centralized data bottleneck • Enable scale-out • Performance • Reduce I/O overhead by bringing data closer to the application using it • Provide in-memory speed • Reliability • Use the cache as a reliable data store • Real-time content distribution • Used for integration and synchronization purposes

  7. Before: Reliability with a Centralized DB – diagram: each application keeps user session info in a central RDBMS (via JDBC, JDO). • Limitations: • Performance • Scalability

  8. After: Reliability with Distributed Caching – diagram: user session info is kept in the distributed cache shared by the applications. • Benefits: • Performance • Scalability
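A hedged sketch of what the “after” picture implies for session state: the session object lives in the shared cache, so any application instance can serve the user after a fail-over. UserSession, its fields and the cache wiring are hypothetical, not the vendor API from the slides.

```java
import java.io.Serializable;
import javax.cache.Cache;

// Sketch: session state kept in the distributed cache instead of the central database.
public class SessionStore {
    // Hypothetical session object; real applications would carry richer state.
    public static class UserSession implements Serializable {
        public String userId;
        public String shoppingCartJson;
    }

    private final Cache<String, UserSession> sessions;

    public SessionStore(Cache<String, UserSession> sessions) {
        this.sessions = sessions;
    }

    // Any application instance can save the session...
    public void save(UserSession session) {
        sessions.put(session.userId, session);
    }

    // ...and any other instance can pick it up after a fail-over, without hitting the RDBMS.
    public UserSession load(String userId) {
        return sessions.get(userId);
    }
}
```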

  9. Before: Scalability with a Centralized DB – diagram: a telephone operating service where a load balancer spreads the peak load of users across applications, all of which go through a single RDBMS (JDBC, JDO) that becomes the bottleneck.

  10. After: Scalability with a Distributed Cache – diagram: the same telephone operating service under peak user load, with the load balancer spreading requests across applications that share the distributed cache instead of a single database.

  11. Distributed Caching Topologies • Master / Local Cache • Replicated Cache • Partitioned Cache
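To illustrate the first topology, here is a minimal master/local (near-cache) sketch: a per-JVM map in front of the shared master cache, with reads falling back to the master only on a local miss. Eviction and invalidation are deliberately omitted; a real product handles them, and the class itself is an assumption for illustration.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.cache.Cache;

// Sketch of the master/local topology: repeated reads are served from the local copy.
public class LocalFrontCache<K, V> {
    private final Cache<K, V> master;                          // shared, distributed cache
    private final Map<K, V> local = new ConcurrentHashMap<>(); // per-JVM near cache

    public LocalFrontCache(Cache<K, V> master) {
        this.master = master;
    }

    public V get(K key) {
        // Fall back to the master cache only on a local miss.
        return local.computeIfAbsent(key, master::get);
    }

    public void put(K key, V value) {
        master.put(key, value);   // the master remains the system of record
        local.put(key, value);    // keep the local copy warm
    }
}
```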

  12. Content distribution – diagram: a Distributed Caching Network carries content from a publisher, through content-based routing, to heterogeneous subscribers (J2EE, Java fat clients, .NET, C++) over adapters such as JCA, MDB, JNI and RMI. • Content-based routing (no need for static queues) • Passing content and functionality • Dynamic orchestration (without changing the application)
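Content-based routing can be illustrated with plain JMS message selectors: the subscriber declares what content it wants instead of binding to a static queue, and the filtering happens in the messaging layer. The “trades” topic name, the selector expression and the ConnectionFactory handling are assumptions for the sketch, not the product’s API.

```java
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import javax.jms.Topic;

// Sketch: a subscriber that receives only the content it is interested in.
public class ContentBasedSubscriber {
    public static MessageConsumer subscribe(ConnectionFactory factory) throws Exception {
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        Topic trades = session.createTopic("trades");

        // Only messages whose properties match the selector are delivered to this consumer;
        // the routing decision is made by the messaging layer, not by client-side code.
        MessageConsumer consumer =
                session.createConsumer(trades, "symbol = 'MSFT' AND quantity > 1000");
        connection.start();
        return consumer;
    }
}
```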

  13. SBA – Virtual Middleware: a single virtualization technology for caching and messaging. Applications reach the clustered space through a virtual table (JDBC), a virtual topic/queue (JMS) or JCache. • The same data can be viewed through different interfaces! • A single runtime for maintaining scalability and redundancy across all systems • Reduces both the maintenance overhead and development complexity • Provides grid capabilities to EXISTING applications

  14. Caching Edition – product evolution: from JavaSpaces (2001–2003) to the GigaSpaces EAG Caching Edition, a grid application server and distributed caching product (2005). The stack: middleware virtualization (parallel processing, messaging bus, distributed caching), distributed shared memory (JavaSpaces), a common clustering architecture, and a service grid of on-demand computing resources optimized for a commodity server setup.

  15. Case Studies 1. Distributed session sharing 2. A geographically distributed trading application

  16. Simple Example: Distributed Caching – session sharing between multiple mobile applications • Session sharing • Fail-over • Replication • Load balancing

  17. Background – Trading Applications • Trading clients allow traders to monitor the market and submit trades. • The read/write ratio is extremely high. • Events have to be delivered as close to real time as possible. • Traditional approaches used mostly messaging (IIOP, JMS, sockets) to implement such systems.

  18. Caching for the Virtual Enterprise – diagram: a replicated cache with partitioned ownership spans NY, London and Tokyo, each site running the Order Book application with services such as market view, quote management, hit manager, credit manager and session manager. • Maintain a local cache of the market view • Maintain session objects and profiles through leasing • Use the master/worker pattern to execute logic on the server session (see the sketch below)
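The master/worker pattern mentioned above can be sketched directly over the JavaSpaces API that the deck builds on: the master writes task entries into the space and workers take and process them. Obtaining the space proxy (lookup/discovery) is omitted, and the TradeTask entry is illustrative.

```java
import net.jini.core.entry.Entry;
import net.jini.core.lease.Lease;
import net.jini.space.JavaSpace;

// Sketch of master/worker over a JavaSpace: tasks flow through the shared space.
public class MasterWorkerSketch {
    // JavaSpaces entries are plain classes with public fields and a no-arg constructor.
    public static class TradeTask implements Entry {
        public String orderId;
        public String payload;
        public TradeTask() {}
    }

    // Master side: push work into the space for any available worker to pick up.
    public static void submit(JavaSpace space, String orderId, String payload) throws Exception {
        TradeTask task = new TradeTask();
        task.orderId = orderId;
        task.payload = payload;
        space.write(task, null, Lease.FOREVER);
    }

    // Worker side: blocking take of the next matching task (null fields act as wildcards).
    public static TradeTask nextTask(JavaSpace space) throws Exception {
        return (TradeTask) space.take(new TradeTask(), null, Long.MAX_VALUE);
    }
}
```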

  19. Challenges: Bandwidth – replication between NY and London over a 10 Mb/s link. • Solution: • Batching • Compression • Async replication • Data is kept local • Updates are local, based on ownership
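A rough sketch of the batching and async-replication idea: local updates are buffered in memory and shipped across the constrained WAN link in one batch per interval, so the link sees one round trip per flush instead of one per update. The Consumer that ships the batch (compression, transport) is assumed to be provided by the replication layer.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Sketch: buffer updates locally and replicate them asynchronously in batches.
public class AsyncBatcher<T> {
    private final List<T> buffer = new ArrayList<>();
    private final Consumer<List<T>> shipBatch;   // supplied by the replication layer (assumption)
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public AsyncBatcher(Consumer<List<T>> shipBatch, long flushMillis) {
        this.shipBatch = shipBatch;
        scheduler.scheduleAtFixedRate(this::flush, flushMillis, flushMillis, TimeUnit.MILLISECONDS);
    }

    // Local writes are cheap: they only append to the in-memory buffer.
    public synchronized void add(T update) {
        buffer.add(update);
    }

    // One WAN round trip per interval instead of one per update.
    private synchronized void flush() {
        if (buffer.isEmpty()) return;
        shipBatch.accept(new ArrayList<>(buffer));
        buffer.clear();
    }
}
```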

  20. Challenges: Reliability – synchronous replication between primary and backup within each site; asynchronous replication between NY and London over the 10 Mb/s link.

  21. Scaling through Partitioning – diagram: the data is partitioned within each site (NY1/NY2, London1/London2) behind a load balancer, with async replication between the sites per partition over the WAN.
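A minimal sketch of routing a key to its owning partition within a site: the same routing key always maps to the same partition, so reads and updates stay local to the owner and only that owner replicates its changes to the remote site. The hashing scheme and partition count are assumptions; the real product decides ownership for you.

```java
// Sketch: content-based partition routing.
public class PartitionRouter {
    private final int partitionCount;

    public PartitionRouter(int partitionCount) {
        this.partitionCount = partitionCount;
    }

    // The same key always lands on the same partition (e.g. NY1 vs NY2 within a site).
    public int partitionFor(Object routingKey) {
        return Math.floorMod(routingKey.hashCode(), partitionCount);
    }
}
```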

  22. Challenges: Sync with an External DB • Load data into the cache from the external data source when it is not already in the cache (see the read-through sketch below) • Use the replication channel to perform reliable async replication to the external database (Sybase)
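A hedged sketch of the load-on-miss behaviour described above: on a cache miss the value is loaded from the external data source and cached for subsequent readers, while write-behind to the database is left to the replication channel. The loader function stands in for the real JDBC access to the Sybase store.

```java
import java.util.function.Function;
import javax.cache.Cache;

// Sketch: read-through access that falls back to the external database on a miss.
public class ReadThrough<K, V> {
    private final Cache<K, V> cache;
    private final Function<K, V> dbLoader;   // stands in for JDBC access to the external DB

    public ReadThrough(Cache<K, V> cache, Function<K, V> dbLoader) {
        this.cache = cache;
        this.dbLoader = dbLoader;
    }

    public V get(K key) {
        V value = cache.get(key);
        if (value == null) {              // cache miss: go to the external data source
            value = dbLoader.apply(key);
            if (value != null) {
                cache.put(key, value);    // populate the cache for subsequent readers
            }
        }
        return value;
    }
}
```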

  23. Challenges: Data Distribution – diagram: primary/backup pairs in NY and London connected over the 10 Mb/s link. • Event-driven on trade updates • Aggregation of events from all sites • Supports unicast / multicast • Server-side filtering
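Event-driven delivery of trade updates can be sketched with standard JCache entry listeners; the optional event filter slot is where server-side filtering would plug in. TradeListener, the key/value types and the cache configuration are illustrative, not the vendor's eventing API.

```java
import javax.cache.configuration.FactoryBuilder;
import javax.cache.configuration.MutableCacheEntryListenerConfiguration;
import javax.cache.configuration.MutableConfiguration;
import javax.cache.event.CacheEntryEvent;
import javax.cache.event.CacheEntryUpdatedListener;

// Sketch: push trade updates to interested parties instead of having them poll.
public class TradeEvents {
    public static class TradeListener implements CacheEntryUpdatedListener<String, Double> {
        @Override
        public void onUpdated(Iterable<CacheEntryEvent<? extends String, ? extends Double>> events) {
            for (CacheEntryEvent<? extends String, ? extends Double> e : events) {
                System.out.println("Trade update: " + e.getKey() + " -> " + e.getValue());
            }
        }
    }

    public static MutableConfiguration<String, Double> configuration() {
        MutableConfiguration<String, Double> cfg = new MutableConfiguration<>();
        cfg.addCacheEntryListenerConfiguration(new MutableCacheEntryListenerConfiguration<>(
                FactoryBuilder.factoryOf(TradeListener.class),
                null,      // an optional CacheEntryEventFilter would do server-side filtering here
                false,     // old value not required
                false));   // asynchronous delivery
        return cfg;
    }
}
```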

  24. Challenges: Distributed Query – diagram: the Order Book application issues “Select xx from..” against the partitioned cache. • Provide SQL- and ID-based queries • Partition data based on content • Distribute the query based on ownership
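A minimal scatter/gather sketch of the distributed query: the query is sent to every partition that may own matching data and the partial results are merged. The Partition interface is an assumption standing in for the product's partition proxies.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: fan a query out to the owning partitions and merge the results.
public class DistributedQuery {
    // Hypothetical partition proxy; a real product provides this.
    public interface Partition<T> {
        List<T> execute(String sqlLikeQuery);
    }

    public static <T> List<T> query(List<Partition<T>> partitions, String sqlLikeQuery) {
        List<T> merged = new ArrayList<>();
        for (Partition<T> partition : partitions) {
            merged.addAll(partition.execute(sqlLikeQuery));   // each owner answers for its own data
        }
        return merged;
    }
}
```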

  25. Challenges: Security – applied to the partitioned cache used by the Order Book application. • SSO (Single Sign-On) • Provides authentication and authorization • Authorization can be based on content and operation • Replication filters enable filtering of data between sites based on content • Designed with minimal performance impact in mind
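A small sketch of a content-based replication filter, one of the mechanisms listed above: before an update leaves the site it is checked against a rule, for example "only replicate entries owned by this site". The Update type and the rule are hypothetical, not the product's filter API.

```java
import java.util.function.Predicate;

// Sketch: decide, per update, whether it may cross the site boundary.
public class ReplicationFilter<T> {
    private final Predicate<T> allowed;

    public ReplicationFilter(Predicate<T> allowed) {
        this.allowed = allowed;
    }

    // Called by the (hypothetical) replication channel before shipping an update.
    public boolean shouldReplicate(T update) {
        return allowed.test(update);
    }
}
```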

  26. Summary • Distributed caching addresses the performance, scalability, and reliability needs of distributed applications. • It is a major piece in any grid deployment.
