
DB2 pureScale Overview


Presentation Transcript


  1. DB2 pureScale Overview. Brian Olynyk, DB2 Lab Outreach and Emerging Technologies.

  2. Agenda • Industry Direction • Where Does DB2 pureScale Come From? • Transparent Application Scalability • High Availability • Competitive Comparison for Availability • Competitive Comparison for Scalability

  3. Critical IT Applications Need Reliability and Scalability
  • Local Databases are Becoming Global: successful global businesses must deal with exploding data and server needs; competitive IT organizations need to handle rapid change
  • Downtime is Not Acceptable: any outage means lost revenue and permanent customer loss; today's distributed systems need reliability
  • Customers need a highly scalable, flexible solution for the growth of their information, with the ability to easily grow existing applications

  4. IT Needs to Adapt in Hours…Not Months
  • Handling Change is a Competitive Advantage: dynamic capacity is not the exception; over-provisioning to handle critical business spikes is inefficient; IT must respond to changing capacity demand in days, not months
  • Application Changes are Expensive: changes to handle more workload volume can be costly and risky; developers rarely design with scaling in mind; adding capacity should be stress free
  • Businesses need to be able to grow their infrastructure without adding risk

  5. DB2 pureScale • Unlimited Capacity • Buy only what you need, add capacity as your needs grow • Application Transparency • Avoid the risk and cost of application changes • Continuous Availability • Deliver uninterrupted access to your data with consistent performance Learning from the undisputed Gold Standard... System z

  6. Where Does DB2 pureScale Come From?

  7. DB2 for z/OS Data Sharing is the Gold Standard
  • Everyone recognizes DB2 for z/OS as the "Gold" standard for scalability and high availability; even Oracle agrees
  • Why? The Coupling Facility: centralized locking and a centralized buffer pool deliver superior scalability and superior availability
  • The entire environment on z/OS uses the Coupling Facility: CICS, MQ, IMS, Workload Management, and more

  8. DB2 pureScale Architecture
  • Cluster of DB2 nodes running on Power servers, accessing shared data
  • Automatic workload balancing
  • Leverages the global lock and memory manager technology from z/OS
  • Integrated Tivoli System Automation and DB2 Cluster Services
  • InfiniBand network

  9. DB2 pureScale: Technology Overview
  • Clients connect anywhere and see a single database; clients connect into any member, and automatic load balancing and client reroute may change the underlying physical member to which a client is connected
  • Members: the DB2 engine runs on several host computers that cooperate with each other to provide coherent access to the database from any member
  • Integrated cluster services: failure detection, recovery automation, and a cluster file system, delivered in partnership with STG and Tivoli
  • Cluster interconnect: low latency and high speed; special optimizations provide significant advantages on RDMA-capable interconnects (e.g. InfiniBand)
  • PowerHA pureScale technology: efficient global locking and buffer management, with synchronous duplexing to a secondary to ensure availability
  • Shared storage access: a data sharing architecture with shared access to the database; members write to their own logs on shared disk, and the logs are accessible from another host (used during recovery)
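
The "connect to any member, reroute on failure" behavior can be pictured with a small, hedged sketch. This is not the IBM client driver (which handles routing transparently); the member hostnames, the connect check, and the retry loop are hypothetical, purely to illustrate how a member list plus automatic reroute presents a single-database view.

```python
# Conceptual sketch only: "connect to any member, reroute on failure".
# Hostnames and the is_up() check are hypothetical placeholders.
import random

MEMBERS = ["member0.example.com", "member1.example.com",
           "member2.example.com", "member3.example.com"]   # hypothetical hosts

def connect_to_cluster(members, is_up):
    """Try members in random order (crude load balancing) until one accepts."""
    for host in random.sample(members, len(members)):
        if is_up(host):                      # stands in for a real connect attempt
            return host                      # same single database via any member
    raise RuntimeError("no member available")

# Simulate one member being down; the client still reaches the database.
down = {"member1.example.com"}
chosen = connect_to_cluster(MEMBERS, lambda h: h not in down)
print("connected via", chosen)
```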

  10. What is a Member?
  • A DB2 engine address space, i.e. a db2sysc process and its threads
  • Members share data: all members access the same shared database (a.k.a. "data sharing")
  • Each member has its own buffer pools, memory regions, and log files
  • Members are logical; you can have 1 per machine or LPAR (recommended), or more than 1 per machine or LPAR (not recommended for production)
  • Member != Database Partition: a member is a db2sysc process, whereas a database partition is a partition of the database
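
To visualize the member model above, here is a minimal, hypothetical sketch: each member owns its buffer pools and log, while the database object itself is shared. The class names are invented for illustration and are not DB2 internals.

```python
# Hypothetical illustration of the member model: per-member resources vs. shared data.
from dataclasses import dataclass, field

@dataclass
class SharedDatabase:            # one copy, accessed by every member
    name: str
    pages: dict = field(default_factory=dict)

@dataclass
class Member:                    # one per db2sysc process
    member_id: int
    database: SharedDatabase     # shared across members ("data sharing")
    bufferpool: dict = field(default_factory=dict)   # private to this member
    log: list = field(default_factory=list)          # each member writes its own log

db = SharedDatabase("SAMPLE")
members = [Member(i, db) for i in range(2)]   # both members see the same database
assert members[0].database is members[1].database
assert members[0].bufferpool is not members[1].bufferpool
```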

  11. What is a Caching Facility?
  • Software technology that assists in global buffer coherency management and global locking
  • Derived from System z Parallel Sysplex and Coupling Facility technology, but software based
  • Services provided include the Group Bufferpool (GBP), Global Lock Management (GLM), and the Shared Communication Area (SCA)
  • Members duplex GBP, GLM, and SCA state to both a primary and a secondary caching facility; duplexing is synchronous, optional (but recommended), and set up automatically by default
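
To make the duplexing point concrete, here is a small, hedged sketch: every state change is applied to both the primary and the secondary before the call returns, so the secondary can take over without losing lock or page state. The classes are illustrative only, not the actual CF implementation, and real duplexing happens over RDMA rather than method calls.

```python
# Conceptual sketch of synchronous duplexing of CF state (GBP / GLM / SCA).
class CachingFacility:
    def __init__(self):
        self.group_bufferpool = {}   # GBP: page number -> page image
        self.global_locks = {}       # GLM: lock name -> owning member
        self.sca = {}                # SCA: shared control information

class DuplexedCF:
    def __init__(self):
        self.primary = CachingFacility()
        self.secondary = CachingFacility()

    def register_lock(self, lock_name, member_id):
        # Applied to both replicas before returning: "synchronous duplexing".
        for cf in (self.primary, self.secondary):
            cf.global_locks[lock_name] = member_id

    def write_page(self, page_no, page_image):
        for cf in (self.primary, self.secondary):
            cf.group_bufferpool[page_no] = page_image

cf = DuplexedCF()
cf.register_lock("P_1001", member_id=0)
cf.write_page(1001, b"...page image...")
assert cf.secondary.global_locks == cf.primary.global_locks   # ready for failover
```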

  12. Cluster Interconnect
  • Requirements: a low latency, high speed interconnect between the members and the primary and secondary PowerHA pureScale servers; an RDMA-capable fabric, so direct updates can be made in remote memory without interrupting the CPU
  • Solution: InfiniBand (IB) and uDAPL for performance; InfiniBand supports RDMA and is a low latency, high speed interconnect, and uDAPL reduces kernel time in AIX

  13. Cluster File System
  • Requirements: shared data requires shared disks and a cluster file system, and any failed member must be fenced from the file system
  • Solution: General Parallel File System (GPFS), shipped with DB2 and installed and configured as part of DB2
  • A pre-existing, user-managed GPFS file system is also supported, which allows GPFS to be managed at the same level across the enterprise; DB2 will not manage this pre-existing file system, nor will it apply service updates to GPFS
  • SCSI-3 Persistent Reserve is recommended for rapid fencing

  14. DB2 Cluster Services
  • Orchestrates unplanned event notifications to ensure seamless recovery and availability: member, PowerHA pureScale, AIX, hardware, and other unplanned events
  • Orchestrates planned events: 'stealth' maintenance of hardware and software
  • Integrates with cluster management (TSA, Tivoli System Automation) and the cluster file system (GPFS, General Parallel File System)
  • TSA and GPFS are shipped with, and installed and configured as part of, the DB2 pureScale Feature

  15. Unlimited Capacity
  • DB2 pureScale has been designed to grow to whatever capacity your business requires
  • Flexible licensing designed to minimize the cost of peak times: you only pay for additional capacity when you use it, even if only for a single day
  • Need more? Just deploy another server, and turn off DB2 when you're done
  • Issue: all year, except for two days, the system requires 3 servers of capacity. Solution: use DB2 pureScale, add another server for those two days, and pay software license fees only for the days you use it
  • IBM has run architecture validation with more than 100 nodes
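
The licensing example above is easy to quantify with a quick calculation; the per-server-day rate below is invented purely to show the shape of the arithmetic, not actual IBM pricing.

```python
# Hypothetical cost sketch for "3 servers all year, a 4th server for 2 peak days".
# The daily rate is made up; only the arithmetic pattern matters.
DAILY_RATE = 100.0                     # hypothetical license cost per server per day

baseline_days, peak_days = 363, 2
always_on_servers, peak_extra_servers = 3, 1

pay_per_day_used = (baseline_days + peak_days) * always_on_servers * DAILY_RATE \
                   + peak_days * peak_extra_servers * DAILY_RATE
license_peak_all_year = (baseline_days + peak_days) * (always_on_servers
                                                       + peak_extra_servers) * DAILY_RATE

print(f"pay only for the peak days used: {pay_per_day_used:,.0f}")
print(f"license 4 servers all year:      {license_peak_all_year:,.0f}")
```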

  16. Proof of DB2 pureScale Architecture Scalability
  • How far will it scale? Take a web-commerce-type workload: read mostly, but not read only
  • Don't make the application cluster aware: no routing of transactions to members, to demonstrate transparent application scaling
  • Scale out to the 128-member limit and measure scalability

  17. The Result
  • 2, 4 and 8 members: over 95% scalability
  • 16 members: over 95% scalability
  • 32 members: over 95% scalability
  • 64 members: 95% scalability
  • 88 members: 90% scalability
  • 112 members: 89% scalability
  • 128 members: 84% scalability
  Validation testing includes capabilities to be available in future releases.
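
Scalability percentages like those above are conventionally computed as measured throughput divided by ideal linear throughput. The short sketch below shows that calculation using the slide's figures, with single-member throughput normalized to 1 and "over 95%" treated as 0.95; the implied throughputs are back-calculated for illustration, not published benchmark numbers.

```python
# Scaling efficiency = measured throughput / (members x single-member throughput).
single_member_throughput = 1.0          # normalized (assumption)

reported = {2: 0.95, 16: 0.95, 32: 0.95, 64: 0.95, 88: 0.90, 112: 0.89, 128: 0.84}

for members, efficiency in reported.items():
    measured = efficiency * members * single_member_throughput   # implied throughput
    recomputed = measured / (members * single_member_throughput)
    print(f"{members:3d} members: ~{measured:6.1f}x single-member throughput "
          f"({recomputed:.0%} efficiency)")
```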

  18. Dive Deeper into a 12 Member Cluster
  • A more challenging workload with more updates: 1 update transaction for every 4 read transactions, a typical read/write ratio for many OLTP workloads
  • No cluster awareness in the application and no routing of transactions to members, demonstrating transparent application scaling
  • Redundant system: 14 8-core p550s, including a duplexed PowerHA pureScale™
  • Scalability remains above 90%

  19. Scalability for OLTP Applications (chart: relative scalability versus the number of members in the cluster)

  20. DB2 pureScale is Easy to Deploy
  • Single installation for all components
  • Monitoring integrated into Optim tools
  • Single installation for fixpaks and updates
  • Simple commands to add and remove members

  21. Installation Video

  22. Transparent Application Scaling

  23. Application Transparency
  • Take advantage of extra capacity instantly
  • No need to modify your application code
  • No need to tune your database infrastructure
  • Your DBAs can add capacity without re-tuning or re-testing
  • Your developers don't even need to know more nodes are being added

  24. Transparent Application Scalability
  • Scalability without application or database partitioning
  • Centralized locking and a real global buffer pool with RDMA access result in real scaling without making the application cluster aware
  • Sharing of data pages is via RDMA from a true shared cache (not synchronized access via process interrupts between servers)
  • No need to partition the application or data for scalability, resulting in lower administration and application development costs
  • Distributed locking in RAC results in higher overhead and lower scalability
  • Oracle RAC best practices recommend fewer rows per page (to avoid hot pages), partitioning the database to avoid hot pages, and partitioning the application to get some level of scalability; all of these result in higher management and development costs

  25. High Availability

  26. The Key to Scalability and High Availability: Efficient Centralized Locking and Caching
  • As the cluster grows, DB2 maintains one place to go for locking information and shared pages, optimized for very high speed access
  • DB2 pureScale uses Remote Direct Memory Access (RDMA) to communicate with the PowerHA pureScale server: no IP socket calls, no interrupts, no context switching
  • Results: near-linear scalability to large numbers of servers; constant awareness of what each member is doing; if one member fails, no need to block I/O from other members; recovery runs at memory speeds
  • (diagram: Members 1-3 connected to the CF, which hosts the group buffer pool and group lock manager)

  27. Continuous Availability
  • Protect from infrastructure outages: architected for no single point of failure
  • Automatic workload balancing
  • Duplexed secondary global lock and memory manager
  • Tivoli System Automation automatically handles all component failures
  • DB2 pureScale stays up even with multiple node failures
  • Shared disk failure handled using disk replication technology

  28. Recover Instantaneously From Node Failure
  • Protect from infrastructure-related outages
  • Redistribute workload to surviving nodes immediately
  • Completely redundant architecture
  • Recover in-flight transactions on the failing node in as little as 15 seconds, including detection of the problem
  • (diagram: application servers and DB2 clients connected to the cluster)

  29. Minimize the Impact of Planned Outages
  • Keep your system up during OS fixes, hardware updates, and administration
  • Planned maintenance flow: identify the member, do the maintenance, bring the node back online

  30. Online Recovery
  • The DB2 pureScale design point is to maximize availability during failure recovery processing
  • When a database member fails, only in-flight data remains locked until member recovery completes (in-flight = data being updated on the failed member at the time it failed)
  • Target time to row availability: under 20 seconds
  • (chart: percentage of data available versus time in seconds during a database member failure)

  31. Steps Involved in DB2 pureScale Member Failure
  • Failure detection
  • The recovery process pulls directly from the CF: the pages that need to be fixed, and the location of the log file to start recovery from
  • A restart light instance performs redo and undo recovery
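
The three steps above can be summarized in a small, hedged sketch: the restart light instance asks the CF which pages were in flight and where in the log to start, then replays (redo) and rolls back (undo) only what the failed member left unfinished. Function and field names here are invented for illustration; they are not DB2 internals.

```python
# Conceptual sketch of member crash recovery driven from CF state (illustration only).
def recover_member(cf_state, failed_member, log_reader):
    # 1. Pull the recovery scope directly from the CF: no log scan is needed to find it.
    pages_to_fix = cf_state["inflight_pages"][failed_member]
    start_lsn = cf_state["recovery_start_lsn"][failed_member]

    # 2. Redo: replay the failed member's log forward from the starting point.
    for record in log_reader(failed_member, start_lsn):
        if record["page"] in pages_to_fix:
            apply_redo(record)                       # hypothetical helper

    # 3. Undo: roll back transactions that were in flight at the time of failure.
    for txn in cf_state["inflight_txns"][failed_member]:
        rollback(txn)                                # hypothetical helper

    return pages_to_fix                              # these rows become available again

def apply_redo(record):   # placeholders so the sketch runs
    pass

def rollback(txn):
    pass

cf_state = {"inflight_pages": {0: {1001}}, "recovery_start_lsn": {0: 500},
            "inflight_txns": {0: ["txn42"]}}
log = lambda member, lsn: [{"page": 1001, "lsn": 501}]
print("recovered pages:", recover_member(cf_state, 0, log))
```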

  32. Failure Detection for a Failed Member
  • DB2 has a watchdog process to monitor itself for software failure; the watchdog is signaled any time the DB2 member is dying and interrupts the cluster manager to tell it to start recovery, so software failure detection times are a fraction of a second
  • The DB2 cluster manager performs very low level, sub-second heartbeating (with negligible impact on resource utilization) and other checks to determine congestion or failure
  • The result is hardware failure detection in under 3 seconds, without false failovers
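
A rough sketch of the heartbeat side of this, under made-up parameters (a 0.5-second beat interval and a miss threshold chosen so failure is declared within about 3 seconds): it is not the actual cluster manager logic, just the standard pattern the slide describes, where only sustained silence, not a single missed beat, triggers failover.

```python
# Conceptual heartbeat monitor: sub-second beats, declare failure only after
# several consecutive misses so transient congestion does not cause false failovers.
# Interval and threshold values are assumptions for illustration.
import time

HEARTBEAT_INTERVAL = 0.5      # seconds between probes (assumed)
MISS_THRESHOLD = 5            # ~2.5s of misses + detection latency < 3s (assumed)

def monitor(probe, on_failure):
    """probe() returns True if the host answered its heartbeat."""
    misses = 0
    while True:
        if probe():
            misses = 0                     # healthy again, reset the counter
        else:
            misses += 1
            if misses >= MISS_THRESHOLD:   # sustained silence, not a blip
                on_failure()
                return
        time.sleep(HEARTBEAT_INTERVAL)

# Example: a host that stops responding after 3 beats.
answers = iter([True, True, True] + [False] * 10)
monitor(lambda: next(answers, False),
        on_failure=lambda: print("member declared failed; starting recovery"))
```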

  33. Member Failure Summary
  • DB2 Cluster Services automatically detects the member's death, informs the other members and the CFs, and initiates automated member restart on the same or a remote host
  • Member restart is like crash recovery in a single system, but is much faster: redo is limited to in-flight transactions and benefits from the page cache in the CF
  • Clients are transparently re-routed to healthy members
  • Other members are fully available at all times ("online failover"): the CF holds the update locks held by the failed member, and other members can continue to read and update data not locked for update by the failed member
  • When member restart completes, the locks are released and all data is fully available

  34. Competitive Comparison for Availability

  35. Steps Involved in a RAC Node Failure
  • Node failure detection
  • Data block remastering
  • Locking of pages that need recovery
  • Redo and undo recovery
  • Unlike DB2 pureScale, Oracle RAC does not centralize its lock or data cache

  36. With RAC – Access to GRD and Disks are Frozen
  • When Instance 1 fails, the Global Resource Directory (GRD) is redistributed across the surviving instances
  • Lock updates stop and I/O requests are frozen: no more I/O until the pages that need recovery are locked

  37. With RAC – Pages that Need Recovery are Locked
  • The recovery instance reads the redo log of the failed node and locks the pages that need recovery
  • It must read the log and lock those pages before the freeze is lifted; I/O requests remain frozen in the meantime

  38. DB2 pureScale – No Freeze at All
  • The CF (central lock manager) always knows what changes are in flight
  • When Member 1 fails there is no I/O freeze: the CF already knows which rows on which pages had in-flight updates at the time of failure

  39. Competitive Comparison for Scalability

  40. Oracle RAC: a Single Instance Wants to Read a Page
  • A process on Instance 1 wants to read page 501, which is mastered by Instance 2
  • The system process checks the local buffer pool: page not found
  • The system process sends an IPC to the Global Cache Service (GCS) process to get page 501, requiring a context switch to schedule GCS on a CPU
  • GCS copies the request to kernel memory to make the TCP/IP stack call and sends the request to Instance 2
  • The IP receive call requires interrupt processing on the remote node, which responds back via an IP interrupt to GCS on Instance 1
  • GCS sends an IPC to the system process (another context switch to process the request)
  • The system process performs the I/O to get the page

  41. What Happens in DB2 pureScale to Read a Page
  • An agent on Member 1 wants to read page 501
  • db2agent checks the local buffer pool: page not found
  • db2agent performs a Read and Register (RaR) RDMA call directly into CF memory: no context switching, no kernel calls, and the request to the CF is synchronous
  • The CF replies that it does not have the page (again via RDMA)
  • db2agent reads the page from disk
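
Here is a minimal sketch of that read path, with the RDMA call to the CF stood in for by a plain dictionary lookup (Python cannot express RDMA; the point is the control flow: local buffer pool, then group buffer pool, then disk). Names such as read_page are descriptive placeholders, not DB2 interfaces.

```python
# Conceptual control flow for a page read in DB2 pureScale (illustration only).
# The RDMA Read-and-Register call is modeled as a lookup in the CF's group buffer
# pool; "disk" is a dictionary standing in for shared storage.
def read_page(page_no, local_bufferpool, group_bufferpool, disk):
    if page_no in local_bufferpool:                 # 1. local buffer pool hit
        return local_bufferpool[page_no]

    page = group_bufferpool.get(page_no)            # 2. RaR "RDMA" into CF memory
    if page is None:                                # 3. CF does not have it either
        page = disk[page_no]                        # 4. read from shared disk

    local_bufferpool[page_no] = page                # cache it locally for next time
    return page

disk = {501: b"page 501 image"}
local_bp, gbp = {}, {}
print(read_page(501, local_bp, gbp, disk))          # miss, miss, disk read
print(read_page(501, local_bp, gbp, disk))          # now a local hit
```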

  42. The Advantage of DB2 Read and Register with RDMA
  • The DB2 agent on Member 1 writes directly into CF memory with the page number it wants to read and the buffer pool slot it wants the page to go into ("I want page 501, put it into slot 42 of my buffer pool")
  • The CF responds by writing directly into memory on Member 1, either indicating that it does not have the page (get it from disk) or returning the requested page of data
  • The total end-to-end time for RaR is measured in microseconds; calls are so fast that the agent may even stay on the CPU for the response
  • Much more scalable, and does not require locality of data

  43. DB2 pureScale: Two Members Update the Same Page
  • The agent on Member 2 makes a Set Lock State (SLS) RDMA call to the CF for an X-lock on the row and a P-lock to indicate that the page will be updated
  • The P-lock contends with the lock held by Member 1, so the GLM tells Member 1 to release its P-lock
  • Member 1 completes its update and has the page pulled from its local memory into the GBP via a WARM request
  • Note that the P-lock is not held for a transaction boundary, only for the duration of the time it takes to make the byte changes to the page
  • The CF responds via RDMA with the grant of the lock request, pushes the updated page into Member 2's memory, and invalidates other members' copies
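
A small, hedged sketch of that negotiation: the page (P) lock acts as a short-term latch on the physical page, separate from the transaction-scope row (X) lock, and the CF forces the current holder to publish its changes to the group buffer pool before handing the page to the next member. The class, method, and data names are invented for illustration only.

```python
# Conceptual sketch of two members updating different rows on the same page.
# The P-lock is a short-term page latch; the X-lock is a transaction-scope row lock.
class CachingFacility:
    def __init__(self):
        self.gbp = {}            # group buffer pool: page -> latest image
        self.p_lock = {}         # page -> member currently holding the page latch
        self.valid_copies = {}   # page -> set of members with a valid local copy

    def set_lock_state(self, member, page, publish):
        """SLS: grant the page latch, forcing the current holder to publish first."""
        holder = self.p_lock.get(page)
        if holder is not None and holder != member:
            self.gbp[page] = publish(holder, page)     # WARM-style pull into the GBP
            self.valid_copies[page] = {member}         # invalidate other local copies
        self.p_lock[page] = member
        return self.gbp.get(page)                      # push current image to requester

cf = CachingFacility()
local = {0: {1001: "row1=old, row4=old"}, 1: {}}

def publish(holder, page):                             # holder finishes its byte changes
    local[holder][page] = "row1=111000, row4=old"
    return local[holder][page]

cf.p_lock[1001] = 0                                    # Member 0 is mid-update
local[1][1001] = cf.set_lock_state(1, 1001, publish)   # Member 1 requests the page
print(local[1][1001])                                  # sees Member 0's latest bytes
```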

  44. A Closer Look at 2 Members Updating the Same Page (Different Rows)
  • Member 1 runs UPDATE T1 SET C3 = 111000 WHERE C1 = 1; Member 2 runs UPDATE T1 SET C4 = SE WHERE C1 = 4
  • (sequence diagram: both members issue RaR calls for page 1001 and SLS calls to the PowerHA pureScale CF for their row X-locks plus the P-lock on page 1001; the CF asks Member 1 to release its P-lock and pull the page, marks Member 1's copy of page 1001 invalid, and grants the page to Member 2)

  45. The Same Updates in Oracle – Why RAC Needs Locality of Data
  • Instance 1 runs UPDATE T1 SET C3 = 111000 WHERE C1 = 1; Instance 2 runs UPDATE T1 SET C4 = SE WHERE C1 = 4; Instance 3 is the master for page 1001
  • (sequence diagram: each instance asks the master, Instance 3, for page 1001 for update; after an initial read from disk, the master directs whichever instance currently holds the page and its X lock to send it to the requester, with each transfer acknowledged)

  46. The Same Updates in Oracle – Why RAC Needs Locality of Data (continued)
  • (animation of the previous slide: as the two updates repeat, page 1001 and its X lock ping-pong between Instance 1 and Instance 2, with a "send page 1001" request through the master and a "got it" acknowledgement for every transfer)

  47. Scalability Differences
  • Oracle RAC must lock a page whenever there is the intent to update that page
  • DB2 pureScale must lock a page only while rows are actually being changed on that page
  • DB2 pureScale improves concurrency between members in a cluster, which results in better scalability and less need for locality of data

  48. Summary: What can DB2 pureScale Do For You?
  • Deliver higher levels of scalability and superior availability: better concurrency during regular operations and during member failure
  • Less application design and rework for scalability
  • Improved SLA attainment
  • A better consolidation platform: sized to fit, not one size fits all
  • Lower overall costs for applications that require high transactional performance and ultra-high availability

  49. Backup
