
Replication Solutions in the IBM Informix Dynamic Server




Presentation Transcript


  1. Informix User Forum 2005: Moving Forward With Informix. Replication Solutions in the IBM Informix Dynamic Server. Speaker: Madison Pruet / IBM. Atlanta, Georgia, December 8-9, 2005

  2. Replication Offerings in IDS
     HDR (primary/secondary) and Enterprise Replication

  3. How do HDR and ER differ?
     HDR: Provides a single primary and a single secondary.
     ER:  Allows configurable source(s)/target(s) and supports peer-to-peer.
     HDR: Primary and secondary must run the same executables and have a similar disk layout; they are mirror images.
     ER:  Source and target do not have to be the same; they do not have to be mirror images.
     HDR: The secondary is restricted to 'dirty read' report processing.
     ER:  Allows full transactional usage of both source and target.
     HDR: Simple to set up and administer.
     ER:  Setup and administration are more complex (pre-v10).
     HDR: Does not support blobspace blobs.
     ER:  Supports blobspace blobs.
     HDR: Replication can be synchronous.
     ER:  Replication is asynchronous.
     HDR: Primary purpose is high availability.
     ER:  Primary purpose is data distribution.

  4. Highly Available Replication (HDR)
     (Diagram: a primary server replicating to a secondary server)

  5. HDR Setup
     1) Make a backup of the primary server (Server-A): ontape -s -L 0 / onbar -b -L 0
     2) Notify the primary server of the secondary's identity: onmode -d primary Server-B
     3) Physically restore the backup on the secondary server: ontape -p / onbar -r -p
     4) Notify the secondary server of the primary's identity: onmode -d secondary Server-A
     5) Logical log transmission begins.

  6. HDR: Normal Operations
     Log buffers are transferred from the primary to the secondary over a network link for log transmission. On the primary, users are reading and updating the database; the secondary allows dirty reads and can be used for running reports.

  7. HDR Availability: Failover (Not Any More!!)
     When the primary goes down, the secondary is switched to standard mode (onmode -d standard). Users can then read and update the database on it and run reports.

  8. HDR Restart
     1) Bring Server-B to a quiescent state: onmode -s
     2) Switch Server-B back to secondary mode: onmode -d secondary Server-A
     3) Physically restore the failed system (ontape -p) if there was a media failure, or restart the primary server (oninit) if there was no media failure.
     4) Possibly restore unloaded logs.
     5) Transfer current logs from the secondary and resume.

  9. Recent Changes to HDR
     • Support of logged SBLOBs
     • Support of TimeSeries
     • Transfer of an index if corruption is encountered
     • Reimplementation of DRAUTO
     • Group name usage: you probably need to set INFORMIXCONTIME and INFORMIXCONRETRY
       - INFORMIXCONTIME: TCP timeout value in seconds
       - INFORMIXCONRETRY: number of times to retry the TCP connect
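The retry behavior these two environment variables describe can be pictured with a small Python sketch. This is an illustrative model of the slide's description, not Informix client code; the function and parameter names are hypothetical:

```python
import time

def connect_with_retry(connect_fn, contime=15, conretry=1):
    """Model of the INFORMIXCONTIME/INFORMIXCONRETRY behavior as the
    slide describes it: keep trying the TCP connect until `contime`
    seconds elapse or `conretry` retries are used up, whichever comes
    first.  `connect_fn` stands in for the real connect call."""
    deadline = time.monotonic() + contime
    retries = 0
    while True:
        try:
            return connect_fn()
        except ConnectionError:
            if retries >= conretry or time.monotonic() >= deadline:
                raise  # out of retries or out of time
            retries += 1
            time.sleep(0.01)  # brief pause before the next attempt
```

With conretry=2, a connection that never comes up is attempted three times in total (the initial attempt plus two retries) before the error is surfaced.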

  10. Demo 1 Time

  11. Enterprise Replication (ER/CDR)

  12. New ER features for v10
      Major Features:
      • Schema Evolution Support
      • Initial Sync/Resync Support (cdr sync and cdr check enabled in v10.UC4)
      • Template Support
      Supporting Functionality:
      • Mastered Replicates
      • Shadow Replicates
      Miscellaneous:
      • Always-Apply Conflict Resolution
      • Ignore Deletes

  13. Mastered Replicates
      • ER maintains a dictionary defining the transport format.
      • Can be strict (column names must match) or non-strict.
      • The dictionary is obtained by selecting information from one of the catalogues, using the table definition at 'define replicate' time.
      • Provides better support for replicate consistency: the local table definition is checked against the schema definition within ER.
      • Strictly mastered replicates (name verification) simplify schema evolution: while performing schema evolution, the local format of the row and the replicated format of the row may differ.
      • Classic replicates (pre-v10) can be converted to mastered replicates.
      • Required for Schema Evolution Support.
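The verification step can be sketched in Python. This is a conceptual model only; the real check happens inside ER against its catalogue dictionary, and all names here are hypothetical:

```python
def verify_participant(master_cols, local_cols, strict=True):
    """Compare a participant's local table definition against the
    master dictionary, in the spirit of a mastered replicate: under
    the strict rule, column names must match as well; under the
    non-strict rule, only the column count and types are checked.
    Columns are (name, type) pairs in declaration order."""
    if len(master_cols) != len(local_cols):
        return False
    for (m_name, m_type), (l_name, l_type) in zip(master_cols, local_cols):
        if m_type != l_type:
            return False          # type mismatch fails either rule
        if strict and m_name != l_name:
            return False          # name mismatch fails only strict
    return True
```

A renamed column passes a non-strict check but fails a strict one, which is why strict (name-verified) mastered replicates are the ones that make schema evolution tractable.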

  14. Shadow Replicates
      • Provide a seamless means of converting the replicate characteristics, including the projection list.
      • Provide replication between a subset of the participants of a primary replicate.
      • The shadow can be swapped with its primary.
      • Can have some attributes that differ from the primary replicate; for example, the primary replicate might be using timestamp conflict resolution while the shadow uses always-apply.
      • Can contain different columns from the primary replicate.
      • Must be in the same state as the primary replicate.
      • Provide supporting functionality for other functions: remastering, sync, and schema evolution.

  15. Defining a Mastered Replicate
      Key options: -M identifies the node to be used as the source of the CDR catalogue; -t creates an empty mastered replicate; -n y creates a strict mastered replicate; -v performs verification only; -u automatically creates non-existing objects.

      usage: cdr define repl [-c server] [-vuiAFIORTm] -C rule(s) [-M master]
             [-S scope] [-n y/n] [-t] [-a time] [-e intvl] [-f y|n] [-D y/n]
             repl participant
      -M --master=<node>                          define master replicate
      -t --empty                                  empty mastered replicate
      -n --name=y|n                               mastered replicate name verification
      -v --verify                                 verify the existing replicates using master definition
      -u --autocreate                             automatically create tables if they do not exist
      -a time --at=time                           replicate at specified time
      -c server --connect=server                  connect to server
      -e intvl --every=intvl                      replicate every intvl minutes
      -i --immediate                              continuous replication (default)
      -A --ats                                    aborted transaction spooling
      -C rule[,rule] --conflict=rule[,rule]       conflict resolution rule(s)
      -F --floatcanon                             transfer floating point in canonical form (deprecated)
      -I --floatieee                              transfer floating point in IEEE form (recommended)
      -O --optimize                               don't call procedure unless different server
      -R --ris                                    row information spooling
      -T --firetrigger                            fire triggers when replicating
      -S scope --scope=scope                      scope of conflict resolution (row or trans)
      -f y|n --fullrow y|n                        enable/disable sending of full row for updates
      -m <primary repl> --mirrors <primary repl>  define a shadow replicate
      -D y|n --ignoredel y|n                      do not process any deletes on targets

  16. Adding a participant to an existing mastered replicate
      usage: cdr change repl [-c server] [-a | -d] [-v | -u] replicate participant
      -c server --connect=server  connect to server
      -a --add          add participant
      -d --del          remove participant
      -v --verify       verify the existing replicates using master definition
      -u --autocreate   automatically create tables if they do not exist

  17. Defining a Shadow Replicate
      The key option here is -m, which names the primary replicate for this shadow replicate. Otherwise this is the same cdr define repl command used for mastered replicates:

      usage: cdr define repl [-c server] [-vuiAFIORTm] -C rule(s) [-M master]
             [-S scope] [-n y/n] [-t] [-a time] [-e intvl] [-f y|n] [-D y/n]
             repl participant
      -m <primary repl> --mirrors <primary repl>  define a shadow replicate

  18. Shadow Replicates (with schema evolution)
      A table has been altered. A shadow of the primary replicate is created and then the two are swapped. This allows each node to receive in either the primary or the shadow.

  19. How shadow replicates are used with resync
      Resync data for the replicate uses the shadow.

  20. Swapping the shadow with the primary
      usage: cdr swap shadow [-c server] -p name -P ID -s name -S ID
      -p --primaryname=xxx   name of primary replicate
      -P --primaryid=XXX     id of primary replicate
      -s --shadowname=yyy    name of shadow replicate
      -S --shadowid=YYY      id of shadow replicate
      N.B. As part of the swap, the names of the primary and the shadow replicate will be swapped; the replicate ids will not. Use cdr list replicate to get the replicate id.

  21. Converting a classic replicate into a mastered replicate
      Must be done prior to performing any alter on a replicated table.
      1) First create a shadow replicate for the existing primary replicate.
      2) Then swap the shadow with the primary replicate.
      3) Once the swap is performed, remove the shadow.

  22. Setup diagram for demos
      An HDR pair serv1prim/serv1sec (cdr1) at Corporate Headquarters is connected by ER to the regional offices serv2 (cdr2), serv3 (cdr3), and serv4 (cdr4).
      N.B. Any node within the domain can replicate to any other node within the domain. They do not have to be directly connected.

  23. Sqlhosts file for using ER with HDR
      The HDR pair is configured in one group:

      cdr1 group - - i=1
      serv1prim onsoctcp linuxt40 ids.serv1prim g=cdr1
      serv1sec onsoctcp linuxt40 ids.serv1sec g=cdr1
      cdr2 group - - i=2
      serv2 onsoctcp linuxt40 ids.serv2 g=cdr2
      cdr3 group - - i=3
      serv3 onsoctcp linuxt40 ids.serv3 g=cdr3
      cdr4 group - - i=4
      serv4 onsoctcp linuxt40 ids.serv4 g=cdr4
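To make the group structure concrete, here is a small Python sketch (a hypothetical helper, not an Informix utility) that reads sqlhosts-style lines like the ones above and collects the servers belonging to each ER group via their g= option:

```python
def parse_sqlhosts(text):
    """Group sqlhosts-style entries by ER group.  Lines look like
    'name protocol host servicename [options]'; a 'group' protocol
    declares a group, and server lines attach to one via g=<group>.
    Returns {group_name: [server_name, ...]} in file order."""
    groups = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) < 4:
            continue  # skip blank or malformed lines
        name, proto = fields[0], fields[1]
        # trailing fields such as g=cdr1 or i=1 are key=value options
        opts = dict(f.split("=", 1) for f in fields[4:] if "=" in f)
        if proto == "group":
            groups.setdefault(name, [])
        elif "g" in opts:
            groups.setdefault(opts["g"], []).append(name)
    return groups
```

Run against the file above, this yields cdr1 containing both HDR servers (serv1prim and serv1sec) while each regional group holds a single server, which is exactly how ER treats the HDR pair as one node.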

  24. Demo 2 Time

  25. Templates
      Provide a means for deploying many tables as a unit. They simplify administration, create a replicate set, and reduce the effort needed to set up replication.
      Defining the template:
      • Sets the tables within the template (can be an entire database)
      • Defines the replication attributes
      • Generates mastered replicates for the members of the template
      Realizing the template:
      • Creates all of the participants on the node
      • Can create the tables and/or database on the nodes
      • Verifies that all of the columns match properly

  26. First create the stores_demo database on cdr1
      We create the database on server cdr1 only; we will later use the autocreate function to generate the database and tables on the other nodes. The "msgs" table needs to have a primary key.

      > dbaccessdemo -log
      DBACCESS Demonstration Database Installation Script
      Dropping existing stores_demo database ....
      Creating stores_demo database ....
      Lockmode set.
      Database created.
      Database closed.
      …
      > dbaccess stores_demo -
      Database selected.
      > alter table msgs add constraint primary key (number, lang);
      Table altered.

  27. Then create the 'stores' template on the database
      Empty mastered replicates are created within the template. The -M option names the server where the dictionary is obtained; the template is generated using all of the tables within the stores_demo database.

      > cdr define template stores -C always -S row -M cdr1 -d stores_demo -a
      Obtaining dictionary for stores_demo@cdr1:''mpruet''.customer
      Obtaining dictionary for stores_demo@cdr1:''mpruet''.orders
      Obtaining dictionary for stores_demo@cdr1:''mpruet''.manufact
      Obtaining dictionary for stores_demo@cdr1:''mpruet''.stock
      Obtaining dictionary for stores_demo@cdr1:''mpruet''.items
      Obtaining dictionary for stores_demo@cdr1:''mpruet''.state
      Obtaining dictionary for stores_demo@cdr1:''mpruet''.call_type
      Obtaining dictionary for stores_demo@cdr1:''mpruet''.cust_calls
      Obtaining dictionary for stores_demo@cdr1:''mpruet''.msgs
      Obtaining dictionary for stores_demo@cdr1:''mpruet''.catalog
      Creating mastered replicate stores_serv1_1_1_customer for table ''mpruet''.customer
      Creating mastered replicate stores_serv1_1_2_orders for table ''mpruet''.orders
      Creating mastered replicate stores_serv1_1_3_manufact for table ''mpruet''.manufact
      Creating mastered replicate stores_serv1_1_4_stock for table ''mpruet''.stock
      Creating mastered replicate stores_serv1_1_5_items for table ''mpruet''.items
      Creating mastered replicate stores_serv1_1_6_state for table ''mpruet''.state
      Creating mastered replicate stores_serv1_1_7_call_type for table ''mpruet''.call_type
      Creating mastered replicate stores_serv1_1_8_cust_calls for table ''mpruet''.cust_calls
      Creating mastered replicate stores_serv1_1_9_msgs for table ''mpruet''.msgs
      Creating mastered replicate stores_serv1_1_10_catalog for table ''mpruet''.catalog

  28. What are the characteristics of a Template?
      • Contains the table dictionary of the members of the template
      • Contains the column attributes of the tables within the template
      • Provides a description of the network format of the replicate
      • Generates empty replicates for each of the tables within the template
      • When realized, can generate the tables on the nodes being realized
      The template is an extension of the replicate set. Therefore, if a template is created for a database, all of the replicated tables will be part of that replicate set. This means that further administrative tasks, such as 'cdr check', can be performed on the set.

  29. Finally, realize the template
      We are 'realizing' the stores template on the four servers, using the data on cdr1 to create an initial sync. By using the -u option, we are requesting that any missing table be automatically created from the mastered replicate dictionary.

      > cdr realize template stores -S cdr1 -u cdr2 cdr3 cdr4
      Verification of stores_demo@cdr2:'mpruet'.customer started
      Verification of stores_demo@cdr2:'mpruet'.customer is successful
      Verification of stores_demo@cdr3:'mpruet'.customer started
      Verification of stores_demo@cdr3:'mpruet'.customer is successful
      Verification of stores_demo@cdr4:'mpruet'.customer started
      Verification of stores_demo@cdr4:'mpruet'.customer is successful
      Verification of stores_demo@cdr2:'mpruet'.orders started
      Verification of stores_demo@cdr2:'mpruet'.orders is successful
      …
      Creating table...
      create table 'mpruet'.customer
      (
        customer_num serial not null,
        fname char(15),
        lname char(15),
        company char(20),
        address1 char(20),
        address2 char(20),
        city char(15),
        state char(2),
        zipcode char(5),
        …

  30. And let's see if it worked…
      N.B. We initially created the stores_demo database on cdr1 only; now it is on serv4/cdr4.

      DISPLAY: Next Restart Exit
      Display next page of results.
      ----------------------- stores_demo@serv4 ------ Press CTRL-W for Help --------
      customer_num  101
      fname         Ludwig
      lname         Pauli
      company       All Sports Supplies
      address1      213 Erstwild Court
      address2
      city          Sunnyvale
      state         CA
      zipcode       94086
      phone         408-789-8075

  31. Demo 3 Time

  32. cdr check
      • Provides a means of checking whether replicated tables are synchronized
      • Uses a checksum algorithm to determine if a row is not 'in sync'
      • Optionally can automatically repair any discrepancy if desired
      • Produces a report of rows not synchronized
      • Can be run against a replicate as well as a replicate set
      • Can be run against multiple nodes
      • Detects extra rows or missing rows
      N.B. Not activated until v10.UC4. To use it in v10.UC4, the checksum UDRs must be installed (they will be available on IBM developerWorks).
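Conceptually, the comparison works like the following Python sketch: rows are matched by primary key and compared by checksum rather than by full value. This is illustrative only; the real checksum UDRs, SQL, and report format are not shown, and the names are hypothetical:

```python
import zlib

def check_tables(master_rows, target_rows):
    """cdr-check-style comparison of one target against the master.
    Rows are dicts keyed by primary key; only a checksum of each row
    is compared, so mismatched rows are found without shipping the
    full data.  Classifies keys as missing, extra, or mismatched."""
    def crc(row):
        return zlib.crc32(repr(row).encode())
    m = {k: crc(v) for k, v in master_rows.items()}
    t = {k: crc(v) for k, v in target_rows.items()}
    return {
        "missing":  sorted(set(m) - set(t)),          # on master only
        "extra":    sorted(set(t) - set(m)),          # on target only
        "mismatch": sorted(k for k in set(m) & set(t) if m[k] != t[k]),
    }
```

Feeding it the kind of discrepancies shown on the next slide (a misspelled name, a missing customer, an extra row) produces exactly the Extra/Missing/Mismatch counts that the cdr check report tabulates per node.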

  33. Problem: data is not consistent
      The customer table differs across nodes cdr1, cdr2, and cdr4. The three versions (customer_num, lname):
      Version 1: 101 Paul (spelling?), 102 Sadler, 103 Currie, 104 Higgins, 105 Vector, 106 Watson, 107 Ream, 108 Quinn, 109 Miller, 110 Jaeger, 111 Keyes, 112 Lawson, 113 Beatty, 114 Albertson
      Version 2: 101 Pauli, 103 Currie, 104 Higgins, 105 Vector, 106 Watson, 107 Ream, 108 Quinn, 109 Miller, 110 Jaeger, 111 Keyes, 112 Lawson, 113 Beatty, 114 Albertson (102 is missing?)
      Version 3: 101 Pauli, 102 Sadler, 103 Currie, 104 Higgins, 105 Vector, 106 Watson, 107 Ream, 108 Quinn, 110 Jaeger, 111 Keyes, 112 Lawson, 113 Beatty, 114 Albertson (109 is missing)
      Some nodes also carry extra rows.

  34. Running the basic check
      Run the check on all nodes using the stores set; all nodes will use server 'cdr1' as the master. N.B. Since replication was created by using a template called stores, we have a stores replicate set.

      > cdr check replset -s stores -m cdr1 -a
      ------ Statistics for stores_serv1_1_6_state ------
      Node             Rows      Extra     Missing   Mismatch  Processed
      ---------------- --------- --------- --------- --------- ---------
      cdr1             52        0         0         0         0
      cdr4             52        0         0         0         0
      cdr3             52        0         0         0         0
      cdr2             52        0         0         0         0
      ------ Statistics for stores_serv1_1_9_msgs ------
      Node             Rows      Extra     Missing   Mismatch  Processed
      ---------------- --------- --------- --------- --------- ---------
      cdr1             12        0         0         0         0
      cdr4             12        0         0         0         0
      …
      ------ Statistics for stores_serv1_1_1_customer ------
      Node             Rows      Extra     Missing   Mismatch  Processed
      ---------------- --------- --------- --------- --------- ---------
      cdr1             27        0         0         0         0
      cdr4             28        1         0         1         0
      cdr3             28        1         0         0         0
      cdr2             27        1         1         0         0
      WARNING: replicate is not in sync
      (The check found the inconsistent rows.)

  35. Performing check with repair
      Run the check with repair (-R) on all nodes using the stores set; all nodes will use server 'cdr1' as the master.

      > cdr check replset -R -s stores -m cdr1 -a
      ------ Statistics for stores_serv1_1_6_state ------
      Node             Rows      Extra     Missing   Mismatch  Processed
      ---------------- --------- --------- --------- --------- ---------
      cdr1             52        0         0         0         0
      cdr4             52        0         0         0         0
      cdr3             52        0         0         0         0
      cdr2             52        0         0         0         0
      ------ Statistics for stores_serv1_1_9_msgs ------
      Node             Rows      Extra     Missing   Mismatch  Processed
      ---------------- --------- --------- --------- --------- ---------
      cdr1             12        0         0         0         0
      cdr4             12        0         0         0         0
      …
      ------ Statistics for stores_serv1_1_1_customer ------
      Node             Rows      Extra     Missing   Mismatch  Processed
      ---------------- --------- --------- --------- --------- ---------
      cdr1             27        0         0         0         2
      cdr4             28        1         0         1         1
      cdr3             28        1         0         0         1
      cdr2             27        1         1         0         1
      (The repair deleted the extra rows.)

  36. Extra rows on the target
      Sync and check can process extra rows on the targets through the --extratargetrows=<rule> (-e) option:
      • delete: delete any extra rows found on the targets (default)
      • merge: replicate any extra rows on the targets to all other participants
      • keep: simply keep the extra rows on the targets
      If the extra-row rule is delete, then sync/check will delete the target row using cascading deletes. If the merge rule is being used, then the row will be replicated from the target node.
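The three rules can be modeled in a few lines of Python. This is a hypothetical sketch of the decision, not the server's implementation (cascading deletes are not modeled):

```python
def handle_extra_rows(target_rows, extra_keys, rule="delete"):
    """Apply an --extratargetrows rule to one target's row set.
    Returns (rows kept on the target, rows that would be replicated
    out to the other participants)."""
    if rule == "delete":
        # extra rows are removed from the target (the default)
        kept = {k: v for k, v in target_rows.items() if k not in extra_keys}
        return kept, {}
    if rule == "merge":
        # extra rows stay and are replicated to everyone else
        return dict(target_rows), {k: target_rows[k] for k in extra_keys}
    if rule == "keep":
        # extra rows stay but are not propagated
        return dict(target_rows), {}
    raise ValueError("unknown rule: %s" % rule)
```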

  37. Sync and referential integrity
      • Processing the members of a replicate set uses the referential integrity rules on the source node to determine the order of resync.
      • Parent tables are resynced prior to the child tables.
      • Deletes on the target are done using cascading deletes.
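The parent-before-child ordering amounts to a topological sort over the foreign-key graph. A minimal Python sketch of that ordering (hypothetical helper and table names; the server derives the graph from the source node's constraints):

```python
def resync_order(tables, parent_of):
    """Order tables for resync so that parent tables (referenced by
    foreign keys) come before their children.  `parent_of` maps each
    child table to the list of tables it references.  Simple
    Kahn-style topological sort; ties are broken alphabetically."""
    remaining = set(tables)
    order = []
    while remaining:
        ready = sorted(t for t in remaining
                       if all(p not in remaining
                              for p in parent_of.get(t, [])))
        if not ready:
            raise ValueError("cycle in referential constraints")
        order.extend(ready)
        remaining.difference_update(ready)
    return order
```

For the stores_demo example, where orders references customer and items references orders, this yields customer first, then orders, then items.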

  38. cdr sync
      • Similar to cdr check, but does not use a checksum algorithm to detect inconsistencies; all rows are transferred to the targets.
      • Can be performed on a replicate or a replicate set.
      • Can support synchronization to multiple targets.
      • Invoked via the 'cdr sync' command, or through 'cdr start' and 'cdr realize template'.
      • Does not require that the nodes be idle.
      • Invoked internally as part of the start-with-sync functionality.

  39. Demo 4 Time

  40. Questions? Madison Pruet, mpruet@us.ibm.com
      Informix User Forum 2005: Moving Forward With Informix. Atlanta, Georgia, December 8-9, 2005
