Incrementally updating transportable tablespaces using RMAN

17-Jul-2017 20:43 · 2 Comments


Data Guard was also considered, but it has limitations of its own; see Data Guard Support for Heterogeneous Primary and Logical Standbys in Same Data Guard Configuration [ID 1085687.1] and Data Guard Support for Heterogeneous Primary and Physical Standbys in Same Data Guard Configuration [ID 413484.1]. For the test case, the endian conversion was performed at the source using CONVERT TABLESPACE, because that is much simpler with only 3 tablespaces and 45 data files.


The RMAN CONVERT covered a total of 704GB of allocated data files across the tablespaces, of which 692GB was actually used.
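For reference, allocated-versus-used figures like these can be pulled with a query along the following lines (the tablespace names are illustrative, not our actual ones):

```sql
-- Allocated (data file) size versus used (segment) size, in GB
SELECT df.tablespace_name,
       ROUND(df.alloc_gb, 1) AS allocated_gb,
       ROUND(sg.used_gb, 1)  AS used_gb
FROM  (SELECT tablespace_name, SUM(bytes)/POWER(1024,3) AS alloc_gb
       FROM   dba_data_files
       GROUP  BY tablespace_name) df
JOIN  (SELECT tablespace_name, SUM(bytes)/POWER(1024,3) AS used_gb
       FROM   dba_segments
       GROUP  BY tablespace_name) sg
  ON  df.tablespace_name = sg.tablespace_name
WHERE df.tablespace_name IN ('TBS1','TBS2','TBS3');
```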

The goal: migrate multiple database systems, across multiple sites, all to new hardware (with a new endian format) and a new database version, and of course do it with as small a disruption to the service as possible.

We are not a GoldenGate customer, so a zero-downtime migration was not pursued, and to be honest, even if we had such facilities at our disposal, I think we would still have taken a small outage just to allow a “quiet time” for verification checks etc.

I remember countless SQL and shell scripts, all designed to do as much work concurrently as possible (multiple physical networks, multiple concurrent jobs pulling tables over db links, concurrent index builds, etc etc etc), all with the aim of keeping that outage time to a minimum. In contrast, this current migration using cross-platform transportable tablespaces ran very smoothly indeed.

Below is a rough guide to what is needed to perform such a migration – I didn’t want to list our steps precisely, because people would mistakenly treat that as the definitive prescription for how to do it.
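As a sketch only (tablespace names, directories and file paths below are illustrative, not our actual values), the cutover phase of a transportable tablespace migration generally has this shape – quiesce, export metadata, move the converted files, plug in:

```
-- Source: make the tablespace set read only
ALTER TABLESPACE tbs1 READ ONLY;
ALTER TABLESPACE tbs2 READ ONLY;
ALTER TABLESPACE tbs3 READ ONLY;

# Source: export the transportable metadata with Data Pump
expdp \"/ as sysdba\" directory=DATA_PUMP_DIR dumpfile=xtts_meta.dmp \
  transport_tablespaces=tbs1,tbs2,tbs3

# Copy the converted data files and the dump file to the target host, then
# Target: plug the tablespaces in
impdp \"/ as sysdba\" directory=DATA_PUMP_DIR dumpfile=xtts_meta.dmp \
  transport_datafiles='/oracle/oradata/target/tbs1.dbf','/oracle/oradata/target/tbs2.dbf','/oracle/oradata/target/tbs3.dbf'
```

Remember that the schemas owning segments in the transported tablespaces must already exist on the target before the import.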

The platform name string in the CONVERT command must match Oracle’s own platform naming exactly. At the source, the conversion itself is driven from RMAN:

    configure device type disk parallelism 6 backup type to backupset;
    configure archivelog deletion policy to none;
    convert tablespace tbs1,tbs2,tbs3 to platform="Solaris Operating System (x86-64)" db_file_name_convert '/oracle/oradata/source','/oracle/oradata/target';

Caveat – a transportable tablespace set does not contain objects such as db links, functions, packages, procedures, sequences, synonyms and views, to name a few; those have to be migrated separately.
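The valid platform name strings (and their endian formats) can be listed from the standard v$transportable_platform view:

```sql
-- List Oracle's platform names and endian formats;
-- the TO PLATFORM clause of CONVERT must match one of these exactly.
SELECT platform_id, platform_name, endian_format
FROM   v$transportable_platform
ORDER  BY platform_id;
```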

Some background: there was a requirement for a cross-platform database migration from Solaris Operating System (SPARC) (64-bit) to Solaris Operating System (x86-64), and ideas started flying around. CONVERT DATABASE could not be used because of the endian change. I had done a similar exercise many years ago, with a client moving from 9i to 10g; in those days, migration with low downtime was a massively complex affair.

Summary of results: XTTS took 593 minutes and Data Pump took 663 minutes. One open question: is it necessary to enable REF_CONSTRAINT if the tablespace is READ ONLY, or if the export is consistent?

Thus I can keep my target system up to date with my source system, without taking my source system offline.
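The “incrementally updating” part of the title refers to the roll-forward technique: take incremental backups on the source and apply them to the copied data files on the target, repeating until cutover. A minimal sketch of one iteration – the SCN, file names and staging paths are illustrative, and in practice Oracle’s cross-platform incremental backup scripts (xttdriver.pl, described in MOS note 1389592.1) automate the bookkeeping:

```
# Source: incremental backup of all changes since the last applied SCN
RMAN> backup incremental from scn 123456
        tablespace tbs1, tbs2, tbs3
        format '/stage/xtts_incr_%U';

# Target (12c syntax): roll the copied data file forward with that incremental
RMAN> recover foreign datafilecopy '/oracle/oradata/target/tbs1.dbf'
        from backupset '/stage/xtts_incr_01.bkp';
```

Each iteration shrinks the gap between source and target; only the final iteration, with the tablespaces read only, has to happen inside the outage window.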