The customer was running a stand-alone Oracle Database 11g for Windows and wanted to migrate it to a three-node Oracle RAC database on Linux, keeping the database version unchanged and with the shortest possible downtime.
For same-version (non-cross-version) migrations on later Oracle Database releases, the common approach is DG (Data Guard): once DG is configured, a quick switchover completes the database migration, and the follow-up work is just updating IP addresses.
Although this case is cross-platform, Oracle Database 11g onward supports a limited set of cross-platform DG configurations. Below is the heterogeneous-platform compatibility matrix for DG (physical standby).
From the compatibility matrix, Microsoft Windows x86-64 (platform ID 12) and Linux x86-64 (platform ID 13) can form a heterogeneous DG pair. Note, however, that the database must be 11g and that Patch 13104881 must be applied. The patch exists only for the Linux platform, and it is only needed on the Linux side when redo is shipped from Windows (primary) to Linux (standby); the reverse direction does not hit the corresponding bug. See the README of Patch 13104881 for details.
For details on building an ADG configuration across heterogeneous platforms, refer to the MOS note "Support for heterogeneous standby systems in a physical Data Guard configuration" (Doc ID 1602437.1).
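To confirm which platform IDs are actually involved, the platform of each database can be checked with a quick query against the standard v$database view (run it on both sides):

```sql
-- Run on both the Windows primary and the Linux standby candidate.
-- Windows x86-64 should report PLATFORM_ID 12, Linux x86-64 PLATFORM_ID 13,
-- matching the entries in the DG heterogeneous compatibility matrix.
SELECT platform_id, platform_name FROM v$database;
```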
Once it is confirmed that DG can be built between the existing platform and the new platform, the remaining question is how to convert the single-instance database into a RAC database. That step is performed after the standby on the Linux platform has been converted into the primary. The overall migration steps are as follows:
1. Install Oracle Grid Infrastructure (clusterware) in the new environment.
2. Install Oracle RAC Database software in the new environment.
3. Create ASM disk group, configure the listener.
4. Create a Windows-to-Linux ADG with the first RAC node as the standby (configure real-time synchronization using the first node's VIP address), storing the control files, data files, and log files directly in the shared ASM disk group.
<<<< Because this is a Windows-to-Linux DG, the DB_FILE_NAME_CONVERT and LOG_FILE_NAME_CONVERT parameters must be set, and both require an instance restart to take effect. Note, however, that the directory conversion during the duplicate from the primary to the standby is actually controlled by these two parameters as set on the standby, so in the early stages of configuration the primary does not need them at all. This means the DG configuration can be completed without restarting the primary even in a cross-platform environment (apart from these two parameters, most other DG-related parameters take effect without a restart).
<<<< The LOG_FILE_NAME_CONVERT value must include the directory conversions of all online redo logs and standby redo logs. The DB_FILE_NAME_CONVERT value must include the directory conversions of all data files and temp files. For example:
LOG_FILE_NAME_CONVERT='+DATA01/dbm/onlinelog/','+DATA_DM01/dbm/onlinelog/','+FRA01/dbm/onlinelog/','+DBFS_DG/dbm/onlinelog/'
DB_FILE_NAME_CONVERT='+DATA01/dbm/datafile/','+DATA_DM01/dbm/datafile/','+DATA01/dbm/tempfile/','+DATA_DM01/dbm/tempfile/'
Each value in LOG_FILE_NAME_CONVERT and DB_FILE_NAME_CONVERT should end with a trailing slash (/).
When setting DB_FILE_NAME_CONVERT, account not only for the data files but also for the location of the temp files. In particular, with ASM and OMF-managed files, data files live under a datafile directory and temp files under a tempfile directory; you cannot specify only the disk group name, you must give the specific absolute path you want the files converted to.
<<<< Also note the direction of the two parameter values: assuming library A is the primary and library B the standby, on A these two parameters should be set to 'B location','A location', while on B they should be set to 'A location','B location'. Do not reverse them.
<<<< If the redo transport service is not working properly after the DG configuration, consider disabling and then re-enabling the remote archive destination: alter system set log_archive_dest_state_2=defer; followed by alter system set log_archive_dest_state_2=enable;
<<<< When building the DG from the single-instance primary to the standby, it is best to first configure the ORACLE_SID environment variable, instance_name, and db_unique_name to the same value, and make further adjustments later.
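The standby build in step 4 is typically done with an RMAN active duplicate; a minimal sketch is below, run from an RMAN session connected to both databases. The connection setup, db_unique_name, and the trimmed-down convert pairs are illustrative assumptions, not the exact values from this migration:

```
# RMAN, connected e.g. as: rman target sys@<windows_primary> auxiliary sys@<linux_standby>
# The SET clauses below apply to the standby's spfile, which is what controls
# the directory conversion during the duplicate (see the note above).
DUPLICATE TARGET DATABASE FOR STANDBY FROM ACTIVE DATABASE
  SPFILE
    SET db_file_name_convert '+DATA01/dbm/datafile/','+DATA_DM01/dbm/datafile/'
    SET log_file_name_convert '+DATA01/dbm/onlinelog/','+DATA_DM01/dbm/onlinelog/'
  NOFILENAMECHECK;
```

In a real run the convert parameters would carry all the pairs shown earlier, including the tempfile directories.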
5. Convert the standby into the primary (either perform a Switchover, or, provided real-time synchronization is guaranteed, shut down the primary directly and activate the standby to become a read-write database).
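Step 5 can be sketched with standard Data Guard SQL; which variant to use depends on whether a graceful switchover is possible. These are generic DG statements, not transcripts from this migration:

```sql
-- Option A: graceful switchover
-- On the Windows primary:
ALTER DATABASE COMMIT TO SWITCHOVER TO PHYSICAL STANDBY WITH SESSION SHUTDOWN;
-- (then restart the old primary in mount as a standby)
-- On the Linux standby:
ALTER DATABASE COMMIT TO SWITCHOVER TO PRIMARY WITH SESSION SHUTDOWN;
ALTER DATABASE OPEN;

-- Option B: shut down the primary and activate the standby directly
-- (only if real-time synchronization is guaranteed to be current)
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE ACTIVATE PHYSICAL STANDBY DATABASE;
ALTER DATABASE OPEN;
```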
6. Perform the following steps to convert the single-instance database into a RAC database:
4) Take a backup of the original single-instance pfile, e.g. to /tmp/initORCL.ora, and add the following entries (example for a two-node RAC cluster):
*.cluster_database=TRUE
*.cluster_database_instances=2
*.undo_management=AUTO
<SID1>.undo_tablespace=undotbs (undo tablespace which already exists)
<SID1>.instance_number=1
<SID1>.thread=1
<SID1>.local_listener=LISTENER_<SID1>
<SID2>.instance_number=2
<SID2>.local_listener=LISTENER_<SID2>
<SID2>.thread=2
<SID2>.undo_tablespace=UNDOTBS2
<SID2>.cluster_database=TRUE
<SID2>.cluster_database_instances=2
<SID1> is equal to <DB_NAME>"1", <SID2> is equal to <DB_NAME>"2", e.g. ORCL1, ORCL2.
5) Change the location of the control files in the parameter file from the local drive to the shared cluster file system location, i.e. from
control_files='<local drive>/control01.ctl'
to
control_files='<shared location>/control01.ctl'
6) Create the spfile from the pfile (the spfile should be stored on the shared device):
export ORACLE_SID=ORCL1
sqlplus "/ as sysdba"
create spfile='<shared location>/spfileORCL.ora' from pfile='/tmp/initORCL.ora';
7) Create the $ORACLE_HOME/dbs init file, e.g. initORCL1.ora, containing the following entry:
spfile='<spfile_path_name>'
where spfile_path_name is the complete path name of the SPFILE, e.g.:
spfile='/cfs/spfile/spfileORCL1.ora'
8) Create a new password file for the ORCL1 instance:
orapwd file=orapwORCL1 password=oracle
9) Start the database in mount stage:
startup mount
10) Rename the data files and redo logs to the new shared device:
alter database rename file '<old path and file name>' to '<new path and file name>';
11) Add redo logs for the second instance (and for further instances when more will be started):
alter database add logfile thread 2
group 3 ('<path to redo log member>') size <size>,
group 4 ('<path to redo log member>') size <size>;
alter database enable public thread 2;
12) Create the undo tablespace for the second (and any further) instance from the existing instance; the path and file name will differ in your environment:
CREATE UNDO TABLESPACE UNDOTBS2 DATAFILE
'/dev/RAC/undotbs_02_210.dbf' SIZE 200M;
13) Open the database (i.e. alter database open;) and run $ORACLE_HOME/rdbms/admin/catclust.sql to create the cluster-database-specific views within the existing instance.
2. On the second node and other nodes
14) Set ORACLE_SID and ORACLE_HOME environment variables on the second node
15) Create the $ORACLE_HOME/dbs init file, e.g. initORCL2.ora, for the second node the same way as in point 7).
16) Create a new password file for the second instance ORCL2 as in point 8):
orapwd file=orapwORCL2 password=oracle
17) Start the second instance.
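Once the second instance is up, both instances should be visible cluster-wide; a quick check against the standard gv$instance view confirms this:

```sql
-- Expect one row per running instance, e.g. ORCL1 and ORCL2, both OPEN.
SELECT inst_id, instance_name, status FROM gv$instance;
```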
3. On one of the nodes
18) After configuring the listener, add the database to the cluster as below:
srvctl add database -d <db_name> -o <oracle_home> -p <spfile_location>
srvctl add instance -d <db_name> -i <instance1_name> -n <node1_name>
srvctl add instance -d <db_name> -i <instance2_name> -n <node2_name>
19) In case ASM is used, add the RDBMS instance/ASM dependency, e.g.:
srvctl modify instance -d <db_name> -i <instance_name> -s <+ASM1>
The whole single-to-RAC conversion does not take long; preparing and testing the initialization parameter file and execution scripts in advance will reduce this part of the downtime even further.
For more on converting a single-instance database to RAC, refer to "How to Convert 10g Single-Instance database to 10g RAC using Manual Conversion procedure (Doc ID 747457.1)"; the note applies to 10g and later databases.
After the single-to-RAC conversion is done, complete the follow-up work of adjusting the DG parameters, changing the IP addresses, and so on, to finish the cross-platform migration and the single-to-RAC conversion.
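The DG parameter adjustment mentioned above typically means stopping redo shipment to the decommissioned Windows site and clearing the now-unneeded convert parameters. A sketch of the usual statements on the new Linux RAC primary (destination number 2 and the SID clauses are illustrative assumptions):

```sql
-- Run on the new primary; adjust or remove entries as appropriate.
ALTER SYSTEM SET log_archive_dest_state_2=DEFER SID='*';  -- stop shipping to the old Windows site
ALTER SYSTEM RESET db_file_name_convert SID='*';          -- static parameter: takes effect at next restart
ALTER SYSTEM RESET log_file_name_convert SID='*';
```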