
Blink and You’ll Miss It: Migrating, Cloning and Recovering Oracle 12c Databases At Warp Speed








  1. Blink and You’ll Miss It: Migrating, Cloning and Recovering Oracle 12c Databases At Warp Speed Jim Czuprynski Zero Defect Computing, Inc. Session #UGF3500

  2. My Credentials
• 30+ years of database-centric IT experience
• Oracle DBA since 2001
• Oracle 9i, 10g, 11g OCP and Oracle ACE Director
• > 100 articles on databasejournal.com and ioug.org
• Teach core Oracle DBA courses (Grid + RAC, Exadata, Performance Tuning, Data Guard)
• Regular speaker at Oracle OpenWorld, IOUG COLLABORATE, OUG Norway, and Hotsos
• Oracle-centric blog (Generally, It Depends)

  3. Upgrading to 12c: What’s the Rush?
• 12c Release 1 … is actually 12c Release 2
• 12.1.0.2 offers significant enhancements (New!):
  • PDB Enhancements
  • Big Table and Full Database Caching
  • In-Memory Aggregation
  • In-Memory Column Store
• Support for 11gR2 expires as of 12-2015
• 11gR2 Database issue resolution costs will escalate dramatically

  4. Our Agenda
• Refresher on Multi-Tenancy Databases: CDBs, PDBs, and PDB Migration Methods
• Cloning a New PDB from “Scratch”
• Cloning New PDBs from Existing PDBs
• “Replugging” Existing PDBs
• Migrating a Non-CDB to a PDB
• RMAN Enhancements
• Q+A

  5. Multi-Tenancy: CDBs and PDBs
Oracle Database 12c offers a completely new multi-tenancy architecture for databases and instances:
• A Container Database (CDB) comprises one or more Pluggable Databases (PDBs)
• CDBs are databases that contain common elements shared with their PDBs
• PDBs are comparable to traditional databases in prior releases …
• … but PDBs offer extreme flexibility for cloning, upgrading, and application workload localization

  6. CDBs and Common Objects
CDBs and PDBs share common objects:
• A CDB owns in common:
  • Control files and SPFILE
  • Online and archived redo logs
  • Backup sets and image copies
• Each CDB has one SYSTEM, SYSAUX, UNDO, and TEMP tablespace
• Oracle-supplied data dictionary objects, users, and roles are shared globally between the CDB and all PDBs
[Diagram: CDB1 (CDB$ROOT) containing PDB1–PDB3, sharing the data dictionary, users, roles, SPFILE, control files, SYSTEM/SYSAUX/UNDOTBS1/TEMP tablespaces, online and archived redo logs, backup sets, and image copies]
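This shared layout is easy to confirm from SQL*Plus; a minimal sketch, assuming a connection to CDB$ROOT as SYSDBA (the container names returned depend on your CDB):

```sql
-- List the root, the seed, and every PDB known to this CDB
SELECT con_id, name, open_mode
  FROM v$containers
 ORDER BY con_id;

-- The CDB_ data dictionary views add a CON_ID column, so globally
-- shared users such as SYSTEM are visible in every container
SELECT DISTINCT con_id
  FROM cdb_users
 WHERE username = 'SYSTEM';
```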

  7. PDBs and Local Objects
PDBs also own local objects:
• PDBs have a local SYSTEM and SYSAUX tablespace
• PDBs may have their own local TEMP tablespace
• PDBs can own one or more application schemas:
  • Local tablespaces
  • Local users and roles
• PDBs own all application objects within their schemas
By default, PDBs can only see their own objects.
[Diagram: CDB1 with PDB1 (AP schema, AP_ROLE, AP_DATA), PDB2 (HR schema, HR_ROLE, HR_DATA), and PDB3 (MFG schema, MFG_ROLE, MFG_DATA), each with its own SYSTEM, SYSAUX, and TEMP tablespaces]

  8. Shared Memory and Processes
CDBs and PDBs also share common memory and background processes:
• All PDBs share the same SGA and PGA
• All PDBs share the same background processes
• OLTP: Intense random reads and writes (DBWn and LGWR)
• DW/DSS: Intense sequential reads and/or logical I/O
• Batch and Data Loading: Intense sequential physical reads and physical writes
[Diagram: CDB1 instance (SGA & PGA; DBWn, LGWR, and other background processes) serving PDB1 (OLTP), PDB2 (DW + DSS), and PDB3 (Batch + IDL) against system storage]

  9. Sharing: It’s a Good Thing!
Sharing common resources - when it makes sense - tends to reduce contention as well as needless resource over-allocation:
• Not all PDBs demand high CPU cycles
• Not all PDBs have the same memory demands
• Not all PDBs have the same I/O bandwidth needs
  • DSS/DW: MBPS
  • OLTP: IOPS and Latency
Result: More instances with less hardware

  10. PDBs: Ultra-Fast Provisioning
Four ways to provision PDBs:
• Clone from PDB$SEED
• Clone from an existing PDB
• “Replug” a previously “unplugged” PDB
• Plug in a non-CDB as a new PDB
All PDBs already plugged into the CDB stay alive during these operations!
[Diagram: CDB1 with PDB$SEED, PDB1–PDB5, and an 11gR2 database being plugged in]

  11. Cloning From PDB$SEED

  12. Prerequisites to Oracle 12cR1 PDB Cloning • A valid Container Database (CDB) must already exist • The CDB must permit pluggable databases • Sufficient space for new PDB’s database files must exist
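These prerequisites can be checked from SQL*Plus before attempting the clone; a minimal sketch using standard dynamic performance views (the free-space check will vary with your storage layout):

```sql
-- Is this database a CDB? Returns YES when pluggable databases are permitted
SELECT cdb FROM v$database;

-- Which containers already exist in this CDB?
SELECT con_id, name, open_mode FROM v$pdbs;

-- Where would the new PDB's datafiles land by default?
SHOW PARAMETER db_create_file_dest
```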

  13. Defining PDB Database File Destinations
Declare the new PDB’s destination directory:

CREATE PLUGGABLE DATABASE dev_ap
  ADMIN USER dev_ap_adm IDENTIFIED BY "P@5$w0rD"
  ROLES=(CONNECT)
  FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/pdbseed',
                       '/u01/app/oracle/oradata/dev_ap');

  14. Cloning From the PDB Seed Database
1 A new PDB is cloned in a matter of seconds:

CREATE PLUGGABLE DATABASE dev_ap
  ADMIN USER dev_ap_adm IDENTIFIED BY "P@5$w0rD"
  ROLES=(CONNECT);

From CDB1 instance’s alert log:

CREATE PLUGGABLE DATABASE dev_ap ADMIN USER dev_ap_admin IDENTIFIED BY * ROLES=(CONNECT)
Tue Apr 08 16:50:18 2014
****************************************************************
Pluggable Database DEV_AP with pdb id - 4 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
****************************************************************
Deleting old file#5 from file$
Deleting old file#7 from file$
Adding new file#45 to file$(old file#5)
Adding new file#46 to file$(old file#7)
Successfully created internal service dev_ap at open
ALTER SYSTEM: Flushing buffer cache inst=0 container=4 local
****************************************************************
Post plug operations are now complete.
Pluggable database DEV_AP with pdb id - 4 is now marked as NEW.
****************************************************************
Completed: CREATE PLUGGABLE DATABASE dev_ap ADMIN USER dev_ap_admin IDENTIFIED BY * ROLES=(CONNECT)

  15. Completing PDB Cloning Operations
2 Once the PDB$SEED tablespaces are cloned, the new PDB must be opened in READ WRITE mode:

SQL> ALTER PLUGGABLE DATABASE dev_ap OPEN;

From CDB1 instance’s alert log:

alter pluggable database dev_ap open
Tue Apr 08 16:51:39 2014
Pluggable database DEV_AP dictionary check beginning
Pluggable Database DEV_AP Dictionary check complete
Due to limited space in shared pool (need 6094848 bytes, have 3981120 bytes), limiting Resource Manager entities from 2048 to 32
Opening pdb DEV_AP (4) with no Resource Manager plan active
Tue Apr 08 16:51:56 2014
XDB installed. XDB initialized.
Pluggable database DEV_AP opened read write
Completed: alter pluggable database dev_ap open

  16. Cloning from Existing PDBs

  17. Cloning a New PDB From Another PDB
1 Declare the new PDB’s destination directory:

*.DB_CREATE_FILE_DEST = '/u01/app/oracle/oradata/qa_ap'
… or …
*.PDB_FILE_NAME_CONVERT = '/u01/app/oracle/oradata/qa_ap'

2 Connect as CDB$ROOT and quiesce the source PDB in READ ONLY mode:

SQL> CONNECT / AS SYSDBA;
SQL> ALTER PLUGGABLE DATABASE prod_ap CLOSE IMMEDIATE;
SQL> ALTER PLUGGABLE DATABASE prod_ap OPEN READ ONLY;

3 Clone the target PDB:

SQL> CREATE PLUGGABLE DATABASE qa_ap FROM prod_ap;

  18. Cloning From an Existing PDB
From CDB1 instance’s alert log:

Mon Mar 31 08:02:52 2014
ALTER SYSTEM: Flushing buffer cache inst=0 container=3 local
Pluggable database PROD_AP closed
Completed: ALTER PLUGGABLE DATABASE prod_ap CLOSE IMMEDIATE
ALTER PLUGGABLE DATABASE prod_ap OPEN READ ONLY
Mon Mar 31 08:03:03 2014
Due to limited space in shared pool (need 6094848 bytes, have 3981120 bytes), limiting Resource Manager entities from 2048 to 32
Opening pdb PROD_AP (3) with no Resource Manager plan active
Pluggable database PROD_AP opened read only
Completed: ALTER PLUGGABLE DATABASE prod_ap OPEN READ ONLY
CREATE PLUGGABLE DATABASE qa_ap FROM prod_ap
Mon Mar 31 08:06:16 2014
****************************************************************
Pluggable Database QA_AP with pdb id - 4 is created as UNUSABLE.
If any errors are encountered before the pdb is marked as NEW,
then the pdb must be dropped
****************************************************************
Deleting old file#8 from file$
Deleting old file#9 from file$
Deleting old file#10 from file$
. . .
Deleting old file#21 from file$
Deleting old file#22 from file$
Adding new file#23 to file$(old file#8)
Adding new file#24 to file$(old file#9)
. . .
Adding new file#28 to file$(old file#14)
Marking tablespace #7 invalid since it is not present in the describe file
Marking tablespace #8 invalid since it is not present in the describe file
Marking tablespace #9 invalid since it is not present in the describe file
. . .
Marking tablespace #12 invalid since it is not present in the describe file
Marking tablespace #13 invalid since it is not present in the describe file
Marking tablespace #14 invalid since it is not present in the describe file
Successfully created internal service qa_ap at open
ALTER SYSTEM: Flushing buffer cache inst=0 container=4 local
****************************************************************
Post plug operations are now complete.
Pluggable database QA_AP with pdb id - 4 is now marked as NEW.
****************************************************************
Completed: CREATE PLUGGABLE DATABASE qa_ap FROM prod_ap

  19. Completing PDB Cloning Operations
4 Once the QA_AP database has been cloned, it must be opened in READ WRITE mode:

SQL> ALTER PLUGGABLE DATABASE qa_ap OPEN;

From CDB1 instance’s alert log:

alter pluggable database qa_ap open
Mon Mar 31 08:11:47 2014
Pluggable database QA_AP dictionary check beginning
Pluggable Database QA_AP Dictionary check complete
Due to limited space in shared pool (need 6094848 bytes, have 3981120 bytes), limiting Resource Manager entities from 2048 to 32
Opening pdb QA_AP (4) with no Resource Manager plan active
Mon Mar 31 08:11:59 2014
XDB installed. XDB initialized.
Pluggable database QA_AP opened read write
Completed: alter pluggable database qa_ap open

  20. “Replugging” an Existing PDB

  21. “Unplugging” An Existing PDB
1 Connect as CDB$ROOT on CDB1, then shut down the source PDB:

SQL> CONNECT / AS SYSDBA;
SQL> ALTER PLUGGABLE DATABASE qa_ap CLOSE IMMEDIATE;

2 “Unplug” the existing PDB from its current CDB:

SQL> ALTER PLUGGABLE DATABASE qa_ap UNPLUG INTO '/home/oracle/qa_ap.xml';

3 Drop the unplugged PDB from its current CDB:

SQL> DROP PLUGGABLE DATABASE qa_ap;

  22. “Replugging” An Existing PDB
1 Connect as CDB$ROOT at CDB2:

SQL> CONNECT / AS SYSDBA;

2 Check the PDB’s compatibility with CDB2:

SET SERVEROUTPUT ON
DECLARE
  compat BOOLEAN;
BEGIN
  compat := DBMS_PDB.CHECK_PLUG_COMPATIBILITY(
              pdb_descr_file => '/home/oracle/qa_ap.xml',
              pdb_name       => 'qa_ap');
  DBMS_OUTPUT.PUT_LINE('PDB compatible? ' ||
    CASE WHEN compat THEN 'YES' ELSE 'NO' END);
END;
/

3 “Replug” the existing PDB into its new CDB:

SQL> CREATE PLUGGABLE DATABASE qa_ap USING '/home/oracle/qa_ap.xml' NOCOPY;

4 Open the replugged PDB in READ WRITE mode:

SQL> ALTER PLUGGABLE DATABASE qa_ap OPEN READ WRITE;

  23. Upgrating To 12cR1: Plugging In a Non-CDB As a PDB

  24. Upgrating* a Non-CDB To a PDB
A pre-12cR1 database can be upgrated* to a 12cR1 PDB:
• Either …
  • Upgrade the source database to a 12cR1 non-CDB
  • Plug the upgraded non-CDB into an existing CDB as a new PDB
• … or:
  • Clone a new empty PDB into an existing CDB from PDB$SEED
  • Migrate data from the source database to the newly-cloned PDB
*WARNING: As a member of POEM, I am qualified to make up words. For your own safety, please do not try this without a certified POEM member present; poor grammar and misspelling may result.
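The first path - plug in an already-upgraded non-CDB - can be sketched as follows, assuming the source has been upgraded to 12cR1 (the XML path and PDB name fin_ap are hypothetical):

```sql
-- 1. On the upgraded 12cR1 non-CDB, opened READ ONLY, generate its manifest:
EXEC DBMS_PDB.DESCRIBE(pdb_descr_file => '/home/oracle/fin_ap.xml');

-- 2. On the target CDB, as SYSDBA, plug it in (COPY relocates the datafiles):
CREATE PLUGGABLE DATABASE fin_ap USING '/home/oracle/fin_ap.xml' COPY;

-- 3. Inside the new PDB, convert the non-CDB data dictionary, then open it:
ALTER SESSION SET CONTAINER = fin_ap;
@?/rdbms/admin/noncdb_to_pdb.sql
ALTER PLUGGABLE DATABASE fin_ap OPEN;
```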

  25. Migrating Data From Previous Releases
Depending on application downtime requirements:
• Oracle GoldenGate
  • Low downtime
  • Separate (perhaps expensive) licensing required
• Cross-Platform Transportable Tablespaces
  • 12cR1 integrates TTS operations with DataPump metadata migration automatically
• DataPump Full Transportable Export (FTE)
  • 12cR1 integrates DataPump full export metadata migration with TTS automatically

  26. Cross-Platform Transport (CPT)
1 Back up READ ONLY tablespaces as a backup set from 12.1.0.1 (Ora12101: ARCHIVELOG ON, COMPATIBLE >= 12.0, OPEN READ WRITE):

RMAN> BACKUP FOR TRANSPORT FORMAT '+DATA'
      DATAPUMP FORMAT '/home/oracle/dp_fte.dmp'
      TABLESPACE ap_data, ap_idx;

2 Copy the datafile backup sets to the 12.1.0.2 database (e.g. via DBMS_FILE_TRANSFER)

3 Restore the tablespaces into a 12.1.0.2 non-CDB or PDB (Ora12102: ARCHIVELOG ON, COMPATIBLE = 12.0, OPEN READ WRITE):

RMAN> RESTORE FOREIGN TABLESPACE ap_data, ap_idx
      FORMAT '+DATA'
      FROM BACKUPSET '+FRA'
      DUMP FILE FROM BACKUPSET '/home/oracle/dp_fte.dmp';

  27. Full Transportable Export/Import (FTE)
1 Export the contents of the entire 11.2.0.3 database (Ora11203: ARCHIVELOG ON, COMPATIBLE >= 11.2.0.3, OPEN READ WRITE):

$> expdp system/****** parfile=fte_11203.dpctl

fte_11203.dpctl:
DUMPFILE=fte_ora11g.dmp
LOGFILE=fte_ora11g.log
TRANSPORTABLE=ALWAYS
VERSION=12.0
FULL=Y

2 Copy the datafiles & metadata dump set to the 12.1.0.1 database (e.g. via DBMS_FILE_TRANSFER)

3 Plug non-system datafiles & all objects into the 12.1.0.1 database (Ora12010: ARCHIVELOG ON, COMPATIBLE = 12.0, OPEN READ WRITE):

$> impdp system/****** parfile=fti_11203.dpctl

fti_11203.dpctl:
DIRECTORY=DPDIR
DUMPFILE=fte_ora11g.dmp
LOGFILE=fti_prod_api.log
FULL=Y
TRANSPORT_DATAFILES= …

  28. New PDB Features in Release 12.1.0.2

  29. New! PDBs: Accessibility & Management
• CONTAINERS Clause
  • Queries can be executed in a CDB across identically-named objects in different PDBs
• OMF File Placement
  • CREATE_FILE_DEST controls the default location of all new files in a PDB (really useful for shared storage!)
• Improved State Management on CDB Restart
  • SAVE STATE: PDB automatically reopened
  • DISCARD STATE: PDB left in default state (MOUNT)
• LOGGING Clause
  • Controls whether any future tablespaces are created in LOGGING or NOLOGGING mode
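A sketch of the first three features (the table ap.invoices and PDB dev_hr are hypothetical names):

```sql
-- CONTAINERS clause: aggregate an identically-named table across all
-- open PDBs from CDB$ROOT, grouped by container
SELECT con_id, COUNT(*)
  FROM CONTAINERS(ap.invoices)
 GROUP BY con_id;

-- CREATE_FILE_DEST: pin all of a new PDB's files to one default location
CREATE PLUGGABLE DATABASE dev_hr FROM prod_hr
  CREATE_FILE_DEST = '/u01/app/oracle/oradata/dev_hr';

-- SAVE STATE: reopen this PDB automatically whenever the CDB restarts
ALTER PLUGGABLE DATABASE dev_hr OPEN;
ALTER PLUGGABLE DATABASE dev_hr SAVE STATE;
```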

  30. New! PDBs: Cloning Enhancements
• Subset Cloning
  • USER_TABLESPACES clause captures only the desired tablespaces during PDB cloning from a non-CDB or PDB
• Metadata-Only Cloning
  • NO DATA clause captures only the data dictionary definitions – but not the application data
• Remote Cloning
  • Allows cloning of a PDB or non-CDB on a remote server via a database link
• Cloning from 3rd Party Snapshots
  • SNAPSHOT COPY clause enables cloning a PDB directly from snapshot copies stored on supported file systems (e.g. ACFS or Direct NFS)
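Sketches of each cloning variation (the target PDB names, tablespace names, and database link cdb1_link are hypothetical):

```sql
-- Subset clone: carry over only the AP tablespaces
CREATE PLUGGABLE DATABASE qa_ap2 FROM prod_ap
  USER_TABLESPACES = ('AP_DATA','AP_IDX');

-- Metadata-only clone: dictionary definitions without application data
CREATE PLUGGABLE DATABASE qa_ap_shell FROM prod_ap NO DATA;

-- Remote clone over a database link pointing at the source CDB
CREATE PLUGGABLE DATABASE qa_ap_remote FROM prod_ap@cdb1_link;

-- Snapshot clone on a supported file system (e.g. ACFS or Direct NFS)
CREATE PLUGGABLE DATABASE qa_ap_snap FROM prod_ap SNAPSHOT COPY;
```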

  31. Uber-Fast Database Recovery: 12cR1 Recovery Manager (RMAN) Enhancements

  32. Backup, Restore, and Recover Non-CDBs, CDBs, and PDBs
• Image copy backups now support multi-section, multi-channel BACKUP operations
  • SECTION SIZE directive fully supported
  • Faster image copy file creation
• What is backed up in multiple section(s) …
  • … can be restored in multi-channel fashion more quickly
• Backups for TTS can now be taken with the tablespace set open in READ WRITE mode

  33. BACKUP AS COPY … SECTION SIZE

RMAN> # Back up just one tablespace set with SECTION SIZE
RMAN> BACKUP AS COPY SECTION SIZE 100M TABLESPACE ap_data, ap_idx;

Starting backup at 2014-04-07 13:11:33
using channel ORA_DISK_1
using channel ORA_DISK_2
using channel ORA_DISK_3
using channel ORA_DISK_4
channel ORA_DISK_1: starting datafile copy
input datafile file number=00014 name=+DATA/NCDB121/DATAFILE/ap_data.289.836526779
backing up blocks 1 through 12800
channel ORA_DISK_2: starting datafile copy
input datafile file number=00015 name=+DATA/NCDB121/DATAFILE/ap_idx.290.836526787
backing up blocks 1 through 12800
channel ORA_DISK_3: starting datafile copy
input datafile file number=00014 name=+DATA/NCDB121/DATAFILE/ap_data.289.836526779
backing up blocks 12801 through 25600
channel ORA_DISK_4: starting datafile copy
input datafile file number=00014 name=+DATA/NCDB121/DATAFILE/ap_data.289.836526779
backing up blocks 25601 through 38400
output file name=+FRA/NCDB121/DATAFILE/ap_data.270.844261895 tag=TAG20140407T131133
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:04
channel ORA_DISK_1: starting datafile copy
input datafile file number=00015 name=+DATA/NCDB121/DATAFILE/ap_idx.290.836526787
backing up blocks 12801 through 25600
output file name=+FRA/NCDB121/DATAFILE/ap_idx.298.844261895 tag=TAG20140407T131133
channel ORA_DISK_2: datafile copy complete, elapsed time: 00:00:05
channel ORA_DISK_2: starting datafile copy
input datafile file number=00015 name=+DATA/NCDB121/DATAFILE/ap_idx.290.836526787
backing up blocks 25601 through 38400
output file name=+FRA/NCDB121/DATAFILE/ap_data.270.844261895 tag=TAG20140407T131133
channel ORA_DISK_3: datafile copy complete, elapsed time: 00:00:05
output file name=+FRA/NCDB121/DATAFILE/ap_data.270.844261895 tag=TAG20140407T131133
channel ORA_DISK_4: datafile copy complete, elapsed time: 00:00:06
output file name=+FRA/NCDB121/DATAFILE/ap_idx.298.844261895 tag=TAG20140407T131133
channel ORA_DISK_1: datafile copy complete, elapsed time: 00:00:05
output file name=+FRA/NCDB121/DATAFILE/ap_idx.298.844261895 tag=TAG20140407T131133
channel ORA_DISK_2: datafile copy complete, elapsed time: 00:00:04
Finished backup at 2014-04-07 13:11:44

… or about 11 seconds total time!

  34. RMAN: Table-Level Recovery
The Oracle DBA decides to roll back table AP.VENDORS to a prior point in time (e.g. 24 hours ago), but:
• FLASHBACK VERSIONS Query, FLASHBACK Query, or FLASHBACK TABLE can’t rewind far enough because UNDOTBS is exhausted
• FLASHBACK … TO BEFORE DROP is impossible
• FLASHBACK DATABASE is impractical

Prerequisites: ARCHIVELOG ON, COMPATIBLE = 12.0, OPEN READ WRITE

RMAN> RECOVER TABLE ap.vendors
      UNTIL TIME 'SYSDATE - 1/24'
      AUXILIARY DESTINATION '+AUX';

What happens:
• Appropriate RMAN backup files are located on +FRA
• RMAN creates the auxiliary destination on +AUX
• Tablespace(s) for AP.VENDORS are restored + recovered to the prior point in time via TSPITR
• DataPump exports the table (AP.VENDORS) into a dump set in +AUX
• DataPump imports the recovered data back into +DATA from +AUX

  35. Customizing Table-Level Recovery
Table-level recovery is customizable:
• NOTABLEIMPORT tells RMAN to stop before recovered objects are imported into the target database
• REMAP TABLE renames recovered tables and table partitions during IMPORT
• REMAP TABLESPACE permits remapping of table partitions into different tablespaces
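For example, the AP.VENDORS recovery from the previous slide could be customized along these lines (the renamed table, dump path, and file names are illustrative):

```sql
RMAN> # Recover into a renamed table, leaving the live AP.VENDORS untouched
RMAN> RECOVER TABLE ap.vendors
        UNTIL TIME 'SYSDATE - 1/24'
        AUXILIARY DESTINATION '+AUX'
        REMAP TABLE ap.vendors:vendors_asof;

RMAN> # Or stop before the import, keeping only the DataPump dump set
RMAN> RECOVER TABLE ap.vendors
        UNTIL TIME 'SYSDATE - 1/24'
        AUXILIARY DESTINATION '+AUX'
        DATAPUMP DESTINATION '/home/oracle/dumps'
        DUMP FILE 'vendors_asof.dmp'
        NOTABLEIMPORT;
```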

  36. In-Memory Column Store: A Revolution in SQL Query Performance

  37. New! In-Memory Column Store
• A new way to store data in addition to the database buffer cache
• Can be enabled for specific tablespaces, tables, and materialized views
• Significant performance improvement for queries that:
  • Filter a large number of rows (=, <, >, IN)
  • Select a small number of columns from a table with many columns
  • Join a small table to a larger table
  • Perform aggregation (SUM, MAX, MIN, COUNT)
• Eliminates the need for multi-column indexes

  38. In-Memory Column Store: Setup
1 Allocate memory for the In-Memory Column Store, then bounce the database instance:

SQL> ALTER SYSTEM SET inmemory_size = 128M SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP;

2 Add the desired table to the In-Memory Column Store:

SQL> CONNECT / AS SYSDBA;
SQL> ALTER TABLE ap.randomized_sorted INMEMORY
     MEMCOMPRESS FOR QUERY HIGH
     PRIORITY HIGH;

  39. In-Memory Column Store: Results
SQL> ALTER SYSTEM SET inmemory_query = DISABLE;
System altered.

SQL> EXPLAIN PLAN FOR
  2  SELECT key_sts, COUNT(*)
  3  FROM ap.randomized_sorted
  4  WHERE key_sts IN (10,20,30)
  5  GROUP BY key_sts
  6  ;

Plan hash value: 1010500208
----------------------------------------------------------------------------------------
| Id | Operation          | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT   |                   |     3 |     9 |  7420   (1)| 00:00:01 |
|  1 | HASH GROUP BY      |                   |     3 |     9 |  7420   (1)| 00:00:01 |
|* 2 | TABLE ACCESS FULL  | RANDOMIZED_SORTED |  300K |  879K |  7413   (1)| 00:00:01 |
----------------------------------------------------------------------------------------
   2 - filter("KEY_STS"=10 OR "KEY_STS"=20 OR "KEY_STS"=30)

SQL> SELECT key_sts, COUNT(*)
  2  FROM ap.randomized_sorted
  3  WHERE key_sts IN (10,20,30)
  4  GROUP BY key_sts
  5  ;

   KEY_STS   COUNT(*)
---------- ----------
        30     149260
        20     100778
        10      50151

Elapsed: 00:00:00.89

SQL> ALTER SYSTEM SET inmemory_query = ENABLE;
System altered.

SQL> EXPLAIN PLAN FOR
  2  SELECT key_sts, COUNT(*)
  3  FROM ap.randomized_sorted
  4  WHERE key_sts IN (10,20,30)
  5  GROUP BY key_sts
  6  ;

Plan hash value: 1010500208
-------------------------------------------------------------------------------------------------
| Id | Operation                   | Name              | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------------------
|  0 | SELECT STATEMENT            |                   |     3 |     9 |   319  (14)| 00:00:01 |
|  1 | HASH GROUP BY               |                   |     3 |     9 |   319  (14)| 00:00:01 |
|* 2 | TABLE ACCESS INMEMORY FULL  | RANDOMIZED_SORTED |  300K |  879K |   312  (12)| 00:00:01 |
-------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
   2 - inmemory("KEY_STS"=10 OR "KEY_STS"=20 OR "KEY_STS"=30)
       filter("KEY_STS"=10 OR "KEY_STS"=20 OR "KEY_STS"=30)

SQL> SELECT key_sts, COUNT(*)
  2  FROM ap.randomized_sorted
  3  WHERE key_sts IN (10,20,30)
  4  GROUP BY key_sts
  5  ;

   KEY_STS   COUNT(*)
---------- ----------
        30     149260
        20     100778
        10      50151

Elapsed: 00:00:00.07

… a 12.7X improvement!

  40. Over To You …

  41. Thank You For Your Kind Attention Please feel free to evaluate this session: • Session #UGF3500 • Blink and You’ll Miss It: Migrating, Cloning and Recovering Oracle 12c Databases At Warp Speed • If you have any questions or comments, feel free to: • E-mail me at jczuprynski@zerodefectcomputing.com • Follow my blog (Generally, It Depends): • http://jimczuprynski.wordpress.com • Connect with me on LinkedIn (Jim Czuprynski) • Follow me on Twitter (@jczuprynski)

  42. Coming Soon … • Coming in May 2015 from Oracle Press: • Oracle Database Upgrade, Migration & Transformation Tips & Techniques • Covers everything you need to know to upgrade, migrate, and transform any Oracle 10g or 11g database to Oracle 12c • Discusses strategy and tactics of planning Oracle migration, transformation, and upgrade projects • Explores latest transformation features: • Recovery Manager (RMAN) • Oracle GoldenGate • Cross-Platform Transportable Tablespaces • Cross-Platform Transport (CPT) • Full Transportable Export (FTE) • Includes detailed sample code

  43. Visit IOUG at the User Group Pavilion
• Stop by the User Group Pavilion in the lobby of Moscone South and catch up with the user community!
• Connect with IOUG members and volunteers
• Pick up a discount to join the IOUG community of 20,000+ technologists strong
• Enter for the chance to win books from IOUG Press or a free registration to COLLABORATE 15!
• Visit us Sunday through Wednesday!

  44. IOUG SIG Meetings at OpenWorld
All meetings located in Moscone South - Room 208
Sunday, September 28
  Cloud Computing SIG: 1:30 p.m. - 2:30 p.m.
Monday, September 29
  Exadata SIG: 2:00 p.m. - 3:00 p.m.
  BIWA SIG: 5:00 p.m. - 6:00 p.m.
Tuesday, September 30
  Internet of Things SIG: 11:00 a.m. - 12:00 p.m.
  Storage SIG: 4:00 p.m. - 5:00 p.m.
  SPARC/Solaris SIG: 5:00 p.m. - 6:00 p.m.
Wednesday, October 1
  Oracle Enterprise Manager SIG: 8:00 a.m. - 9:00 a.m.
  Big Data SIG: 10:30 a.m. - 11:30 a.m.
  Oracle 12c SIG: 2:00 p.m. - 3:00 p.m.
  Oracle Spatial and Graph SIG: 4:00 p.m. (*OTN lounge)

  45. COLLABORATE 15 – IOUG Forum
April 12-16, 2015 • Mandalay Bay Resort and Casino • Las Vegas, NV
The IOUG Forum Advantage:
• Save more than $1,000 on education offerings like pre-conference workshops
• Access the brand-new, specialized IOUG Strategic Leadership Program
• Priority access to the hands-on labs with Oracle ACE support
• Advance access to supplemental session material and presentations
• Special IOUG activities with no "ante in" needed - evening networking opportunities and more
www.collaborate.ioug.org
Follow us on Twitter at @IOUG or via the conference hashtag #C15LV!
COLLABORATE 15 Call for Speakers ends October 10

  46. Did you know that IOUG Members get up to 60% off of IOUG Press eBooks? JOIN or RENEW
• By Murali Vallath - Releasing: Sept. 30
• Expert Oracle Database Architecture (3rd Edition) by Thomas Kyte, Darl Kuhn - Releasing: Oct. 22
• Oracle Enterprise Manager 12c Command-Line Interface by Kellyn Pot'vin, Seth Miller, Ray Smith - Releasing: Oct. 15

  47. Connect with IOUG
Twitter: @IOUG or follow hash tag: #IOUG
Facebook: IOUG’s official Facebook Fan Page: www.ioug.org/facebook
LinkedIn: Connect and network with other Oracle professionals and experts in the IOUG Community LinkedIn group: www.ioug.org/linkedin
