|
1. Advantages of the new Oracle 12c PDB features
1) Many PDBs can be consolidated into a single multitenant container database (CDB).
2) A new PDB, or a clone of an existing PDB, can be provisioned quickly.
3) Using the unplug/plug technique, an existing database can be quickly redeployed to a new platform.
4) Many PDBs can be patched or upgraded in a single operation.
5) A single PDB can be upgraded or patched by unplugging it and plugging it into a different, higher-version CDB.
6) The contents of one PDB are isolated from the other PDBs in the same CDB.
7) The responsibilities of the application administrators of these PDBs can be kept separate.
2. New features of 12c PDBs
1) One CDB can contain many PDBs.
2) A PDB is backward compatible with an ordinary pre-12.1 database.
3) A PDB is transparent to applications: you do not need to change client code or database objects.
4) In RAC, each instance opens the CDB as a whole (so the database version of the CDB and each of its PDBs is the same).
5) A session sees only the PDB it is connected to.
6) You can unplug a PDB from one CDB and plug it into another CDB.
7) You can clone a PDB within the same CDB or between different CDBs.
8) Resource Manager is extended to manage resources between PDBs.
9) Physical PDB operations (create, unplug, plug in, clone, drop, set open mode) are performed through SQL statements.
10) The CDB administrator performs these operations while connected to the so-called "root" container.
11) All PDBs are backed up together, but they can be restored separately.
3. Notes on 12c PDBs
1) Each PDB has its own private data dictionary for user-created database objects; at the same time, the CDB as a whole has a data dictionary for the Oracle-supplied system, and each data dictionary defines its own namespace. In other words, there is a global data dictionary (CDB level) and a local data dictionary (PDB level).
2) This new split data dictionary architecture is what allows a PDB to be quickly unplugged from one CDB and plugged into a different one.
3) Each PDB sees only a read-only definition of the Oracle-supplied system.
4) There are global database parameters as well as local ones. PDB-level parameters belong only to a particular PDB and are retained when the PDB is unplugged.
5) Database users can be common (CDB-wide) or local (PDB-only). The SYS and SYSTEM users exist in both kinds of container from the start. If you create a new common user in the CDB, that user is also visible in the PDBs. A user created in a PDB can be used only in that PDB.
6) Temporary tablespaces can be global or local.
7) Redo logs and undo tablespaces are global (CDB level).
8) Data Guard operates on the CDB as a whole; RMAN backups are also scheduled at the CDB level; however, at any time you can back up just a single PDB.
9) Applications connect to a PDB without code changes; system administrators can connect to the CDB; the service name in the connection string identifies the PDB.
10) PDBs allow an application to be defined more clearly and declaratively; a PDB knows nothing about the other PDBs in the same CDB; each PDB is a closed container. This provides a new dimension of database independence and security.
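The notes above can be seen in practice through the container-aware dictionary views. A minimal sketch, run from the root as a privileged common user (view and column names are from the standard 12c data dictionary):

```sql
-- List the PDBs known to this CDB; each row is one container.
-- CON_ID 2 is the seed (PDB$SEED); user-created PDBs start at 3.
SELECT con_id, name, open_mode
FROM   v$pdbs
ORDER  BY con_id;
```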
4. Connect to a PDB
When you create a PDB, a service is created inside it, and that PDB is the service's initial container. You can display the current container with the following statement:
select Sys_Context('Userenv', 'Con_Name') "current container" from dual;
At the 12.1 SQL*Plus prompt, you can also display the current container explicitly with SHOW con_name.
The service is started when the PDB is created. Although the service's metadata is recorded in the PDB, the service name is the same as the PDB name. A session created by an ordinary user cannot change its current container.
Client application code is typically written so that the connect string is determined outside the code. For example, the code may use a TNS alias, which allows the connect string to change without any code changes.
Of course, a PDB can have multiple services. Each of them names the PDB in which it is defined as its initial container. Conventional methods can be used to create, maintain, and remove additional services for a PDB, but be sure not to remove the PDB's default service. Identifying a service is the only way to establish a session whose initial container is a PDB.
The following example shows how simple the Oracle 12c easy-connect syntax is, first connecting to a CDB named "cdb1" and then to one of its PDBs:
sqlplus sys/Sys@localhost:1521/cdb1 AS SYSDBA
CONNECT scott/tiger@localhost:1521/My_PDB
5. Create and open a new Oracle 12c pluggable database (PDB)
Now we will create and open a new pluggable database (PDB) named my_pdb. Every CDB has a standard PDB template called PDB$SEED; we actually create a new PDB by cloning this template. Look at the following example:
sqlplus sys/pass@localhost:1521/cdb1 as sysdba
create pluggable database My_PDB
  admin user App_Admin identified by pass
  file_name_convert = ('/pdbseed/', '/my_pdb/');
The file_name_convert clause determines how the new file names are derived from those of the template, much like its counterpart in RMAN. During PDB creation, Oracle copies only the two data files of the SYSTEM and SYSAUX tablespaces; undo, redo, and the remaining database files are CDB-global files belonging to a special container called CDB$Root.
The admin user clause is mandatory; in its extended form it grants permissions and roles to the new user, who can create new sessions only in my_pdb.
After the pluggable database is created, the new PDB is in MOUNTED mode. Before you can create a session in the new PDB, you must open it, which we do with the following command:
alter pluggable database My_PDB open;
6. Check the container database (CDB) and pluggable database (PDB) files
select con_id, tablespace_name, file_name
from cdb_data_files
where file_name like '%/cdb1/pdbseed/%'
   or file_name like '%/cdb1/my_pdb/%'
order by 1, 2;

CON_ID TABLESPACE_NAME FILE_NAME
------ --------------- ----------------------------------------------
     2 SYSAUX          /home/oracle/oradata/cdb1/pdbseed/sysaux01.dbf
     2 SYSTEM          /home/oracle/oradata/cdb1/pdbseed/system01.dbf
     3 SYSAUX          /home/oracle/oradata/cdb1/My_PDB/sysaux01.dbf
     3 SYSTEM          /home/oracle/oradata/cdb1/My_PDB/system01.dbf
7. Open all Oracle 12c Pluggable Databases (PDB)
In RAC, each PDB of a CDB has its own open mode (OPEN_MODE) and restricted status in every instance. Possible open-mode values are MOUNTED, READ ONLY, and READ WRITE; when the PDB is open, the restricted status can be YES or NO, and it is null otherwise.
Starting an instance (which opens the CDB) does not open its PDBs. The "alter pluggable database" statement is used to set a PDB's open mode. In this SQL statement you can give a specific PDB name or the keyword "all", for example:
alter pluggable database all open;
8. Close all pluggable databases of an Oracle 12c CDB
The following statement closes all PDBs in the CDB:
alter pluggable database all close;
9. Clone an existing Oracle 12c PDB within the same CDB
Below we clone an existing PDB within the same CDB. To do this, before starting the clone you must first close the PDB and re-open it in READ ONLY mode:
alter pluggable database My_PDB close;
alter pluggable database My_PDB open read only;

create pluggable database My_Clone
  from My_PDB
  file_name_convert = ('/my_pdb', '/my_clone');

alter pluggable database My_PDB close;
alter pluggable database My_PDB open;
alter pluggable database My_Clone open;
10. Unplug a pluggable database (PDB) from its container database (CDB)
Next we show how to unplug my_pdb from cdb1. After the "into" keyword you must give the full path of the XML manifest file that the operation generates:
alter pluggable database My_PDB
  unplug into '/home/oracle/oradata/cdb1/my_pdb/my_pdb.xml';
The my_pdb.xml file records the names and full paths of the data files, among other information. This information is used later during the plug-in operation. Note: the PDB's files are still part of the CDB after the unplug, but its state is now UNPLUGGED.
The unplug operation actually makes some changes in the data files so that a successful unplug is properly recorded. Because the PDB is still part of the CDB, you can take an RMAN backup of it, which provides a convenient way to archive the unplugged PDB.
Once you have backed it up, you can remove it from the data dictionary. Of course, you must keep the data files for the subsequent plug-in operation:
drop pluggable database My_PDB keep datafiles;
11. Oracle 12c pluggable databases: plug-in and clone operations
11.1. Plug My_PDB into cdb2
1) Connect to the target container database, here cdb2, whose files live under /home/oracle/oradata/cdb2:
sqlplus sys/pass@localhost:1521/cdb2 as sysdba
2) Then confirm that the PDB to be plugged in is compatible with the new host container. DBMS_PDB.Check_Plug_Compatibility is a function returning a boolean, so call it from a PL/SQL block:
set serveroutput on
declare
  compatible boolean;
begin
  compatible := DBMS_PDB.Check_Plug_Compatibility(
                  PDB_Descr_File => '/home/oracle/oradata/cdb1/my_pdb/my_pdb.xml');
  dbms_output.put_line(case when compatible then 'Compatible' else 'Not compatible' end);
end;
/
If it is not compatible, the check reports an error.
3) Now plug in the PDB. After the "using" keyword you must give the absolute path of the descriptor, i.e. the .xml file generated by the earlier unplug operation:
create pluggable database My_PDB
  using '/home/oracle/oradata/cdb1/my_pdb/my_pdb.xml'
  move
  file_name_convert = ('/cdb1/', '/cdb2/');

alter pluggable database My_PDB open;
11.2. Create a clone from an unplugged PDB
The earlier example recommended keeping a backup of the unplugged PDB. This is useful in many scenarios, for example:
1) giving developers and testers in a development department a fast, repeatable starting point;
2) supporting self-service provisioning;
3) providing a delivery method for new applications.
To illustrate, assume you have unplugged MY_PDB1, placed its files in the appropriate directory, and made them read-only.
create pluggable database MY_PDB1 as clone
  using '/home/oracle/oradata/bk_pdbs/my_pdb1/my_pdb1.xml'
  copy
  file_name_convert = ('/bk_pdbs/my_pdb1/', '/cdb1/my_pdb1/');

alter pluggable database my_pdb1 open;
The "as clone" clause ensures that the new PDB gets a proper, globally unique identifier. You can then look at the GUIDs:
select PDB_Name, GUID
from DBA_PDBs
order by Creation_scn;
Note: when a PDB is unplugged from one CDB and later plugged into another, its DBA_PDBs.GUID stays associated with it. The server code enforces GUID uniqueness within a CDB, but not across CDBs.
11.3. Plug a non-CDB database into an existing CDB as a PDB
Here I will show how to turn a pre-12.1 database into a PDB. There are several ways to do this:
1) transportable tablespaces / Data Pump;
2) copying the data;
3) upgrading the original non-CDB database to 12c and plugging it into a 12c CDB.
Because the first two are standard methods, we describe only the last one here.
Note: the upgrade is not completed in a single step; it is a two-stage operation. First, upgrade your existing database to a 12.1 non-CDB; next, plug the non-CDB into an existing CDB, which means plugging in the PDB and then completing a post-plug-in step.
Step one: upgrade the pre-12.1 database to the 12c release.
Step two: connect to the non-CDB and generate the XML file that describes it as an unplugged PDB, as follows:
shutdown immediate
startup mount
alter database open read only;

begin
  DBMS_PDB.Describe(PDB_Descr_File => '/home/oracle/oradata/noncdb/noncdb.xml');
end;
/

shutdown immediate
The next step is to connect to the receiving CDB, cdb2, and plug in the non-CDB's data files using the descriptor file:
create pluggable database noncdb_pdb
  as clone
  using '/home/oracle/oradata/noncdb/noncdb.xml'
  source_file_name_convert = none
  copy
  file_name_convert = ('/noncdb/', '/cdb2/noncdb_pdb/')
  storage unlimited;
Now open the database to finalize the plug-in, close it, then open it again with the restricted status set to YES:
alter pluggable database noncdb_pdb open;
alter pluggable database noncdb_pdb close;
alter pluggable database noncdb_pdb open restricted;
Finally, run an Oracle-supplied script in SQL*Plus to remove the now-redundant local data dictionary entries, because in the new architecture the metadata definitions of the Oracle system are stored only once per CDB:
alter session set container = noncdb_pdb;
@?/rdbms/admin/noncdb_to_pdb.sql
As a final step, open the newly adopted former non-CDB:
alter pluggable database noncdb_pdb open;
1. Automatic Storage Management (ASM) enhancements
1.1. Flex ASM
In a typical Grid Infrastructure installation, each node runs its own ASM instance, which acts as the storage container for the databases running on that node; this configuration carries a single-point-of-failure risk. For example, if the ASM instance on a node fails, all databases and instances running on that node are affected. To avoid this single point of failure, Oracle 12c provides the Flex ASM feature. Flex ASM is a different concept and architecture overall: only a few ASM instances run on a group of servers in the cluster, and if the ASM instance on a node fails, Oracle Clusterware automatically starts a replacement ASM instance on a different node to maintain availability. In addition, this configuration provides load balancing across the ASM instances running on the nodes. Another benefit is that Flex ASM can be configured on a single node.
When you select Flex Cluster as the cluster installation option, the Flex ASM configuration is selected automatically, because a Flex Cluster requires Flex ASM. You can also choose Flex ASM with a conventional cluster. When you decide to use Flex ASM, you must make sure the required networks are available. You can enable Flex ASM during cluster installation, or use ASMCA to enable it in a standard cluster environment.
The following commands display the current ASM mode:
$ ./asmcmd showclustermode
$ ./srvctl config asm
Or connect to the ASM instance and query the INSTANCE_TYPE parameter. If the value is ASMPROXY, Flex ASM is configured.
1.2. Increased ASM storage limits
The ASM limits on disk group count and disk size have been greatly increased. In 12cR1, the number of supported ASM disk groups rises from 63 in 11gR2 to 511, and the maximum size of each ASM disk rises from 20PB to 32PB.
1.3. Tuning ASM rebalance operations
The new 12c "EXPLAIN WORK FOR" statement estimates the work required by an ASM rebalance operation and places the result in the V$ASM_ESTIMATE dynamic view. Using this view, you can adjust the POWER LIMIT clause to improve the rebalance operation. For example, if you want to estimate the work of adding a new ASM disk before actually running the rebalance manually, you can use the following statements:
SQL> EXPLAIN WORK FOR ALTER DISKGROUP DG_DATA ADD DISK data_005;
SQL> SELECT est_work FROM V$ASM_ESTIMATE;
SQL> EXPLAIN WORK SET STATEMENT_ID = 'ADD_DISK' FOR ALTER DISKGROUP DG_DATA ADD DISK data_005;
SQL> SELECT est_work FROM V$ASM_ESTIMATE WHERE STATEMENT_ID = 'ADD_DISK';
Based on the estimated work reported by the view, you can adjust the rebalance POWER limit to improve performance.
1.4. ASM Disk Scrubbing
The new ASM disk scrubbing operation, available for disk groups with NORMAL or HIGH redundancy, verifies all disks in the group for logical data corruption and, if corruption is detected, automatically repairs it using the ASM mirror copies. Scrubbing can be performed on a disk group, on a specific disk, or on a specific file, with minimal impact. The following examples illustrate disk scrubbing:
SQL> ALTER DISKGROUP dg_data SCRUB POWER LOW|HIGH|AUTO|MAX;
SQL> ALTER DISKGROUP dg_data SCRUB FILE '+DG_DATA/MYDB/DATAFILE/filename.xxxx.xxxx' REPAIR POWER AUTO;
1.5. Active Session History (ASH) for ASM
The V$ACTIVE_SESSION_HISTORY dynamic view now also provides sampling of active sessions in ASM instances. Note, however, that using it requires a Diagnostics Pack license.
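As a sketch, connected to an ASM instance (and assuming a Diagnostics Pack license), the ASM ASH samples can be summarized like any other ASH data:

```sql
-- Top wait events among sampled active sessions on the ASM instance.
SELECT event, session_state, COUNT(*) AS samples
FROM   v$active_session_history
GROUP  BY event, session_state
ORDER  BY samples DESC;
```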
2. Grid Infrastructure architectural enhancements
2.1. Flex clusters
When performing a cluster installation, Oracle 12c offers two types of cluster configuration: the traditional standard cluster and the Flex cluster. In a traditional standard cluster, all nodes are tightly coupled, interact with one another over the private interconnect, and access storage directly. The Flex cluster, on the other hand, introduces two types of nodes arranged in a hub-and-leaf architecture: Hub nodes and Leaf nodes. Hub nodes are similar to the nodes of a traditional standard cluster; for example, they are interconnected over the private network and can read and write storage directly. Leaf nodes are different: they do not access the underlying storage directly, but store and retrieve data through a Hub node.
You can configure up to 64 Hub nodes and a much larger number of Leaf nodes. In a Flex cluster you can configure Hub nodes without any Leaf nodes, but you cannot configure Leaf nodes without a Hub node. Multiple Leaf nodes can be served by a single Hub node. In an Oracle Flex cluster, only the Hub nodes have direct access to the OCR and voting disks. When you plan a large-scale cluster environment, this is a great feature to use: the configuration greatly reduces interconnect contention and gives the traditional standard cluster room to scale.
There are two ways to deploy a Flex cluster:
1) configure a new cluster as a Flex cluster;
2) upgrade a standard cluster to a Flex cluster.
If you are configuring a new cluster, you choose the cluster type in step 3 by selecting the "Configure a Flex cluster" option, and then in step 6 you must assign each node the Hub or Leaf role; in addition, you can select a virtual host name.
Converting from standard cluster mode to Flex cluster mode requires the following steps:
1) Get the current cluster mode with the following command:
$ ./crsctl get cluster mode status
2) Run the following commands as root:
$ ./crsctl set cluster mode flex
$ ./crsctl stop crs
$ ./crsctl start crs -wait
3) Change the role of each node according to your design:
$ ./crsctl get node role config
$ ./crsctl set node role hub|leaf
$ ./crsctl stop crs
$ ./crsctl start crs -wait
Note:
1) You cannot convert a Flex cluster back to standard cluster mode.
2) Changing the cluster mode requires stopping and restarting the cluster.
3) Make sure GNS is configured with a fixed VIP.
2.2. Back up OCR in an ASM disk group
In 12c, OCR can now be backed up in an ASM disk group. This simplifies access to the OCR backup files from every node. When recovering OCR, you no longer have to worry about which node holds the latest OCR backup; just identify the latest backup in ASM and complete the recovery easily. The following example shows how to set an ASM disk group as the OCR backup location:
$ ./ocrconfig -backuploc +DG_OCR
2.3. IPv6 support
In Oracle 12c, Oracle now supports IPv4 and IPv6 network protocol configurations on the same network. You can now configure the public network (Public/VIP) with IPv4, IPv6, or a combination of the two. Be sure, however, to use the same IP protocol configuration on all nodes of the same cluster.
3. RAC (database) enhancements
3.1. What-if command evaluation
The new "what-if" evaluation option of the srvctl command can determine the impact of running a command. This new srvctl option lets you simulate a command without actually executing it or changing the current system. It is especially useful when you want to make a change but are unsure of the outcome; the option reports what the result of the change would be. The -eval option can also be used with the crsctl command. For example, if you want to know what would happen if a particular database were stopped, you can use the following:
$ ./srvctl stop database -d MYDB -eval
$ ./crsctl eval modify resource <resource_name> -attr "value"
3.2. srvctl improvements
srvctl has gained some new command options. The following shows the new start and stop options for cluster database/instance resources:
srvctl start database|instance -startoption NOMOUNT|MOUNT|OPEN
srvctl stop database|instance -stopoption NORMAL|TRANSACTIONAL|IMMEDIATE|ABORT
1. Online data file renaming and migration
Unlike previous versions, in Oracle 12cR1 migrating or renaming a data file no longer requires a series of steps such as putting the tablespace in read-only mode and then taking the data files offline. In 12cR1 a data file can be moved online with the single SQL statement "ALTER DATABASE MOVE DATAFILE". While the data file is being moved, users can run queries, DML, and DDL. In addition, data files can be migrated between storage types, for example from non-ASM to ASM and vice versa.
1.1 Renaming a data file:
SQL> ALTER DATABASE MOVE DATAFILE '/u01/data/users01.dbf' TO '/u01/data/users_02.dbf';
1.2 Migrating a data file from non-ASM storage to ASM:
SQL> ALTER DATABASE MOVE DATAFILE '/u01/data/users_01.dbf' TO '+DG_DATA';
Migrating a data file from one ASM disk group to another:
SQL> ALTER DATABASE MOVE DATAFILE '+DG_DATA/users_01.dbf' TO '+DG_DATA_02';
1.3 Overwriting a data file of the same name if one already exists in the new location:
SQL> ALTER DATABASE MOVE DATAFILE '/u01/data/users_01.dbf' TO '/u02/data_new/users_01.dbf' REUSE;
1.4 Copying the data file to the new location while keeping the old copy in the old location:
SQL> ALTER DATABASE MOVE DATAFILE '/u00/data/users_01.dbf' TO '/u00/data_new/users_01.dbf' KEEP;
You can monitor the progress of a data file move by querying the dynamic view v$session_longops. You can also consult the database alert.log, because Oracle writes the details of the ongoing operation there.
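A sketch of such a monitoring query against v$session_longops (the OPNAME text used in the filter is an assumption; check the actual value on your system):

```sql
-- Progress of an in-flight online datafile move.
SELECT sid, opname, sofar, totalwork,
       ROUND(100 * sofar / totalwork, 1) AS pct_done
FROM   v$session_longops
WHERE  opname LIKE 'Online data file move%'  -- assumed operation name
AND    totalwork > 0
AND    sofar < totalwork;
```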
2. Online migration of table partitions and sub-partitions
In Oracle 12c, moving a table partition or sub-partition to a different tablespace no longer requires a complicated process. Just as previous versions allowed online migration of a non-partitioned table, a table partition or sub-partition can now be migrated to a different tablespace either online or offline. When the ONLINE clause is specified, all DML operations on the partition or sub-partition being moved can proceed uninterrupted. Conversely, if the partition or sub-partition is moved offline, DML operations are not allowed.
Here are the relevant examples:
SQL> ALTER TABLE table_name MOVE PARTITION|SUBPARTITION partition_name TO TABLESPACE tablespace_name;
SQL> ALTER TABLE table_name MOVE PARTITION|SUBPARTITION partition_name TO TABLESPACE tablespace_name UPDATE INDEXES ONLINE;
The first example moves a table partition or sub-partition to a new tablespace offline. The second performs the move online while maintaining the local and global indexes on the table; in addition, because the ONLINE clause is specified, DML operations are not interrupted.
Note:
1) The UPDATE INDEXES clause prevents the local and global indexes on the table from becoming unusable.
2) The usual restrictions on online table moves also apply here.
3) Locks acquired during the move may reduce performance, and a large amount of redo is generated, so pay attention to the size of the partition or sub-partition.
3. Invisible columns
In Oracle 11g R1, Oracle introduced several enhancements around invisible indexes and virtual columns. Developing these capabilities further, Oracle 12cR1 introduces the concept of invisible columns. Remember that in earlier releases, to hide important columns from ordinary queries, we often created a view that hid the sensitive information or applied security conditions.
In 12cR1 you can make a column of a table invisible. When a column is defined as invisible, it does not appear in generic queries, nor in a DESCRIBE of the table definition, unless it is explicitly referenced by name in the SQL statement. Adding or modifying an invisible column is very easy, and the change is just as easily reversed.
SQL> CREATE TABLE emp (eno number(6), ename varchar2(40), sal number(9) INVISIBLE);
SQL> ALTER TABLE emp MODIFY (sal visible);
To insert data into an invisible column, you must reference it explicitly. A virtual column or a partitioning column can also be defined as invisible. However, temporary tables, external tables, and clustered tables do not support invisible columns.
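A minimal sketch using the emp table from the example above; the invisible column behaves normally once it is named explicitly:

```sql
-- SAL must be listed explicitly; an implicit column list would skip it.
INSERT INTO emp (eno, ename, sal) VALUES (1001, 'SMITH', 4500);

SELECT * FROM emp;               -- SAL is not returned
SELECT eno, ename, sal FROM emp; -- SAL is returned when named
```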
4. Multiple indexes on the same column
Before Oracle 12c, you could not create multiple indexes on the same column or the same set of columns. For example, if you had an index on column (a) or on columns (a, b), you could not create another index on the same column or set of columns in the same order. In 12c you can create multiple indexes on the same column or set of columns, as long as the index types differ. However, at any given time only one of them is usable/visible. To test an invisible index, you need to set the parameter optimizer_use_invisible_indexes=true. Here is an example:
SQL> CREATE INDEX emp_ind1 ON EMP (ENO, ENAME);
SQL> CREATE BITMAP INDEX emp_ind2 ON EMP (ENO, ENAME) INVISIBLE;
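Since only one of the two indexes can be visible at a time, a sketch of swapping them (index names from the example above):

```sql
-- Make the b-tree index invisible and the bitmap index visible.
ALTER INDEX emp_ind1 INVISIBLE;
ALTER INDEX emp_ind2 VISIBLE;

-- Or leave visibility as-is and let this session's optimizer
-- consider invisible indexes too:
ALTER SESSION SET optimizer_use_invisible_indexes = TRUE;
```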
5. DDL logging
In earlier versions there was no option to log DDL operations. In 12cR1 you can now record DDL operations in xml and log files. This is very useful when you want to know when a drop or create command was executed and by whom. To turn the feature on, you must set the initialization parameter ENABLE_DDL_LOGGING, which can be set at the database or session level. When the parameter is enabled, all DDL commands are recorded in xml and log files under the log/ddl directory of $ORACLE_BASE/diag/rdbms/DBNAME. The xml file contains information such as the DDL command, IP address, and timestamp. This helps identify when a table was dropped by a user, or when a DDL statement was triggered.
5.1 To enable DDL logging:
SQL> ALTER SYSTEM|SESSION SET ENABLE_DDL_LOGGING = TRUE;
5.2 The following DDL statements may be recorded in the xml and log files:
1) CREATE | ALTER | DROP | TRUNCATE TABLE
2) DROP USER
3) CREATE | ALTER | DROP PACKAGE | FUNCTION | VIEW | SYNONYM | SEQUENCE
6. Temporary Undo
Every Oracle database contains a set of system-related tablespaces, such as SYSTEM, SYSAUX, UNDO, and TEMP, each of which plays a different role in the database. Before Oracle 12cR1, undo records generated by temporary tables were stored in the undo tablespace, much like the undo of normal or permanent tables. In 12cR1, however, temporary undo records can now be stored in a temporary tablespace instead of the undo tablespace. The benefits of this feature include less space consumed in the undo tablespace and less redo generated, because the information is not written to the redo logs. You can enable temporary undo at the session or database level.
6.1. Enabling temporary undo
To use this new feature, the following settings are needed:
1) The COMPATIBLE parameter must be set to 12.0.0 or higher.
2) Set the TEMP_UNDO_ENABLED initialization parameter.
3) Since temporary undo records are now stored in a temporary tablespace, make sure the temporary tablespace has enough free space.
4) At the session level, you can enable the feature with: ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE;
6.2. Querying temporary undo information
The following dictionary views can be used to view statistics about temporary undo data:
1) V$TEMPUNDOSTAT
2) DBA_HIST_UNDOSTAT
3) V$UNDOSTAT
6.3 To turn the feature off, simply run:
SQL> ALTER SYSTEM|SESSION SET TEMP_UNDO_ENABLED = FALSE;
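A session-level sketch tying the pieces together (gtt_demo is a hypothetical global temporary table created just for illustration):

```sql
ALTER SESSION SET TEMP_UNDO_ENABLED = TRUE;

CREATE GLOBAL TEMPORARY TABLE gtt_demo (id NUMBER)
ON COMMIT PRESERVE ROWS;

-- This DML now generates its undo in the temporary tablespace.
INSERT INTO gtt_demo
SELECT level FROM dual CONNECT BY level <= 1000;

-- Temporary undo activity, per the V$TEMPUNDOSTAT view.
SELECT * FROM v$tempundostat;
```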
7. A backup-specific privilege
In 11g R2, the SYSASM privilege was introduced for performing ASM-specific operations. Similarly, 12c introduces the SYSBACKUP privilege for performing backup- and recovery-specific operations in RMAN. You can therefore create a local user in the database and, without granting SYSDBA, simply grant it the SYSBACKUP privilege to perform backup and recovery tasks in RMAN.
$ ./rman target "username/password as SYSBACKUP"
8. Executing SQL statements in RMAN
Before 12c, you could not execute SQL or PL/SQL commands in RMAN without the SQL prefix; in 12c you can execute any SQL and PL/SQL command directly at the RMAN prompt. For example:
RMAN> SELECT username, machine FROM v$session;
RMAN> ALTER TABLESPACE users ADD DATAFILE SIZE 500m;
9. RMAN table and partition recovery
Oracle database backups fall into two categories: logical and physical. Each backup type has its own advantages and disadvantages. In previous versions, a physical backup was not suitable for restoring an existing table or partition; to restore a particular object you had to have a logical backup. In 12cR1, after an accidental drop or truncation, you can restore a particular table or partition from an RMAN backup to a point in time or SCN.
9.1 When RMAN begins recovering a table or partition, it does the following:
1) identifies the backup sets needed to restore the table or partition;
2) configures a temporary auxiliary database to recover the table or partition to the specified point in time;
3) exports the table or partition into a Data Pump dump file;
4) optionally, imports the table or partition into the source database;
5) optionally, renames it during recovery.
9.2 Example of recovering a table to a point in time with RMAN (make sure you already have a full database backup from earlier):
RMAN> connect target "username/password as SYSBACKUP";
RMAN> RECOVER TABLE username.tablename UNTIL TIME 'TIMESTAMP ...'
        AUXILIARY DESTINATION '/u01/tablerecovery'
        DATAPUMP DESTINATION '/u01/dpump'
        DUMPFILE 'tablename.dmp'
        NOTABLEIMPORT    -- this option prevents the table from being imported automatically
        REMAP TABLE 'username.tablename':'username.new_table_name';    -- use this option to rename the table
9.3 Note:
1) Make sure the /u01 filesystem has enough free space to hold the auxiliary database files and the Data Pump dump.
2) A full database backup must exist, or at least backups of the SYSTEM-related tablespaces.
The following restrictions apply to RMAN table or partition recovery:
1) Tables or partitions belonging to the SYS user cannot be recovered.
2) Tables or partitions stored in the SYSTEM and SYSAUX tablespaces cannot be recovered.
3) You cannot use the REMAP option to recover a table that has a NOT NULL constraint.
10. PGA size limit
Before Oracle 12c R1 there was no option to place a hard limit on PGA size. Although you could set PGA_AGGREGATE_TARGET to a specific value, Oracle could still grow or shrink the PGA dynamically based on workload and demand. In 12c you can set a hard limit on the PGA by enabling automatic PGA management, which is required in order to set the PGA_AGGREGATE_LIMIT parameter. You can now set this parameter to place a hard limit on PGA usage and avoid excessive PGA consumption:
SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT = 2G;
SQL> ALTER SYSTEM SET PGA_AGGREGATE_LIMIT = 0;    -- disables the hard limit
Note:
When the PGA limit is exceeded, Oracle automatically terminates the sessions holding the most untunable PGA memory.
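A quick way to relate the limit to actual consumption, using the standard v$pgastat view:

```sql
-- Compare current PGA consumption with the configured hard limit.
SELECT name, ROUND(value / 1024 / 1024) AS mb
FROM   v$pgastat
WHERE  name IN ('total PGA allocated', 'maximum PGA allocated');
```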
1. Executive summary
1) Partition table maintenance enhancements.
2) Improved database upgrades.
3) Restoring/recovering data files over the network.
4) Data Pump enhancements.
5) Real-Time ADDM.
6) Concurrent statistics gathering.
2. Partition table maintenance enhancements
In other articles I explained how to move a table partition or sub-partition to a different tablespace, online or offline. In this section you will learn about other enhancements related to table partitioning.
2.1. Adding multiple new partitions
Before 12c R1, only one partition could be added to a partitioned table at a time. To add multiple partitions, you had to execute a separate ALTER TABLE ADD PARTITION statement for each new partition. 12c supports adding multiple new partitions with a single ALTER TABLE ADD PARTITION command. The following example shows how to add multiple new partitions to an existing partitioned table:
SQL> CREATE TABLE emp_part (eno number(8), ename varchar2(40), sal number(6))
PARTITION BY RANGE (sal)
(PARTITION p1 VALUES LESS THAN (10000),
PARTITION p2 VALUES LESS THAN (20000),
PARTITION p3 VALUES LESS THAN (30000)
);
Now let us add multiple new partitions:
SQL> ALTER TABLE emp_part ADD
PARTITION p4 VALUES LESS THAN (35000),
PARTITION p5 VALUES LESS THAN (40000);
Similarly, you can add multiple new partitions to list- and system-partitioned tables, provided the table does not have a DEFAULT partition.
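As a sketch of the list-partitioned case (the table name and partition values are hypothetical), the same single-statement syntax applies:

```sql
SQL> CREATE TABLE sales_list (sale_id number(8), region varchar2(10))
     PARTITION BY LIST (region)
     (PARTITION p_north VALUES ('NORTH'),
      PARTITION p_south VALUES ('SOUTH'));
SQL> ALTER TABLE sales_list ADD
     PARTITION p_east VALUES ('EAST'),
     PARTITION p_west VALUES ('WEST');
```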
2.2. Dropping and truncating multiple partitions or sub-partitions
As part of data maintenance, you typically drop or truncate partitions of a partitioned table. Before 12c R1, only one partition could be dropped or truncated at a time. In 12c, a single ALTER TABLE table_name {DROP|TRUNCATE} PARTITIONS command can drop or truncate multiple partitions or sub-partitions. The following examples show how to drop or truncate multiple partitions of a partitioned table:
SQL> ALTER TABLE emp_part DROP PARTITIONS p4, p5;
SQL> ALTER TABLE emp_part TRUNCATE PARTITIONS p4, p5;
To maintain the indexes at the same time, use the UPDATE INDEXES or UPDATE GLOBAL INDEXES clause, as follows:
SQL> ALTER TABLE emp_part DROP PARTITIONS p4, p5 UPDATE GLOBAL INDEXES;
SQL> ALTER TABLE emp_part TRUNCATE PARTITIONS p4, p5 UPDATE GLOBAL INDEXES;
If you drop or truncate partitions without the UPDATE GLOBAL INDEXES clause, you can query the ORPHANED_ENTRIES column of the USER_INDEXES or USER_IND_PARTITIONS dictionary views to find out whether an index contains stale (orphaned) index entries.
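For example, assuming the emp_part table above has a global index, a query along the following lines reveals orphaned entries, and DBMS_PART offers one way to clean them up asynchronously (the schema name is hypothetical):

```sql
SQL> SELECT index_name, orphaned_entries
     FROM user_indexes
     WHERE table_name = 'EMP_PART';
SQL> EXEC DBMS_PART.CLEANUP_GIDX('SCOTT', 'EMP_PART');
```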
2.3. Splitting a partition into multiple new partitions
In 12c, the enhanced SPLIT PARTITION clause allows you to split a particular partition or sub-partition into multiple new partitions with a single command:
SQL> CREATE TABLE emp_part
(eno number(8), ename varchar2(40), sal number(6))
PARTITION BY RANGE (sal)
(PARTITION p1 VALUES LESS THAN (10000),
PARTITION p2 VALUES LESS THAN (20000),
PARTITION p_max VALUES LESS THAN (MAXVALUE)
);
SQL> ALTER TABLE emp_part SPLIT PARTITION p_max INTO
(PARTITION p3 VALUES LESS THAN (25000),
PARTITION p4 VALUES LESS THAN (30000), PARTITION p_max);
2.4. Merging multiple partitions into one partition
You can use a single ALTER TABLE MERGE PARTITIONS statement to merge multiple partitions into one partition:
SQL> CREATE TABLE emp_part
(eno number(8), ename varchar2(40), sal number(6))
PARTITION BY RANGE (sal)
(PARTITION p1 VALUES LESS THAN (10000),
PARTITION p2 VALUES LESS THAN (20000),
PARTITION p3 VALUES LESS THAN (30000),
PARTITION p4 VALUES LESS THAN (40000),
PARTITION p5 VALUES LESS THAN (50000),
PARTITION p_max VALUES LESS THAN (MAXVALUE)
);
SQL> ALTER TABLE emp_part MERGE PARTITIONS p3, p4, p5 INTO PARTITION p_merge;
If the partitions to be merged span a consecutive range of partition keys, you can also use the following form of the command:
SQL> ALTER TABLE emp_part MERGE PARTITIONS p3 TO p5 INTO PARTITION p_merge;
3. Improved database upgrades
Every time a new version is released, an immediate upgrade is something every DBA must face. This section explains two improvements introduced for upgrading to 12c.
3.1. Pre-upgrade script
12c R1 introduces a new, much improved pre-upgrade information script, preupgrd.sql, which replaces the previous utlu[121]s.sql script. Apart from performing pre-upgrade checks and validation, the script can also address, in the form of fixup scripts, various issues that arise before and after the upgrade process. The generated fixup scripts can be executed to resolve problems at different stages, for example pre-upgrade and post-upgrade. When upgrading a database manually, the script must be run manually before starting the actual upgrade. However, when the DBUA tool is used to upgrade a database, the pre-upgrade script runs automatically as part of the upgrade process, and you will be prompted to run the fixup scripts whenever any errors occur. The following example shows how to run the script:
SQL> @$ORACLE_12GHOME/rdbms/admin/preupgrd.sql
The above script generates a log file and a [pre/post]upgrade_fixup.sql script. All of these files are located in the $ORACLE_BASE/cfgtoollogs directory. Before you proceed with the actual upgrade, you should review the log and run the recommended actions and scripts to resolve any issues.
Note: Make sure you copy the preupgrd.sql and utluppkg.sql scripts from the 12c Oracle home's rdbms/admin directory to the rdbms/admin directory of the current database's Oracle home.
3.2. Parallel upgrade
The database upgrade time is proportional to the number of components configured in the database, not to its size. In earlier versions, there was no direct or indirect option available to run the upgrade process in parallel so that it completes quickly.
In 12c R1, catctl.pl (the parallel upgrade utility) replaces the previous catupgrd.sql script; the new script provides an option to run the upgrade in parallel, which shortens the time required to complete the whole upgrade process. The following procedure shows how to start the parallel upgrade (with 3 processes): start the database in upgrade mode, then run the following commands:
cd $ORACLE_12_HOME/perl/bin
$ ./perl catctl.pl -n 3 catupgrd.sql
These two steps need to be run explicitly when a database is upgraded manually. The DBUA tool, however, incorporates both of the above changes.
4. Restoring/recovering data files over the network
Another great enhancement in 12c R1: you can now restore or recover data files, control files, spfiles, tablespaces, or the whole database between primary and standby databases using a service name. This is particularly useful for synchronizing the primary and standby databases.
When a significant gap (delay) is found between the primary and standby databases, you no longer need a complex roll-forward procedure to close it. RMAN can obtain incremental backups over the network and apply them to the physical standby database to recover it. As already stated, you can use the service name to copy the required data files directly from the standby to the primary, for example when a data file or tablespace is lost on the primary database, or when the file cannot be restored from a backup set.
The following procedure demonstrates how to use this new feature to roll forward and synchronize a standby database:
On the physical standby database:
./rman target "username/password@standby_db_tns as SYSBACKUP"
RMAN> RECOVER DATABASE FROM SERVICE primary_db_tns USING COMPRESSED BACKUPSET;
The above example uses the connection string primary_db_tns, defined on the standby database, to connect to the primary database, performs an incremental backup there, then transfers the incremental backups to the standby and applies them to synchronize the standby database. You need to make sure that the connection string primary_db_tns pointing to the primary has been configured on the standby side.
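In practice, managed recovery is usually stopped on the standby before such a roll-forward and restarted afterwards; a sketch of the surrounding steps, run on the standby:

```sql
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
-- run the RMAN roll-forward from the primary here
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```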
In the following example, I will illustrate a scenario in which a data file lost on the primary is obtained and restored from the standby database:
On the primary database:
./rman target "username/password@primary_db_tns as SYSBACKUP"
RMAN> RESTORE DATAFILE '+DG_DISKGROUP/DBNAME/DATAFILE/filename' FROM SERVICE standby_db_tns;
5. Enhanced Data Pump
This section covers enhancements in the Data Pump area. There are many useful enhancements, such as exporting views as tables and turning off redo logging during an import.
5.1. Disabling redo log generation
Data Pump import introduces the new TRANSFORM option DISABLE_ARCHIVE_LOGGING, which supports importing objects without generating redo. When the DISABLE_ARCHIVE_LOGGING value is specified for the TRANSFORM option, redo generation for the objects in the import context is turned off for the duration of the import. When importing large tables, this feature greatly reduces the system pressure and the amount of redo generated, and thus speeds up the import. It can be applied to tables and indexes. The following example illustrates the feature:
$ ./impdp directory=dpump dumpfile=abcd.dmp logfile=abcd.log TRANSFORM=DISABLE_ARCHIVE_LOGGING:Y
5.2. Exporting views as tables
This is another enhancement to Data Pump. With the VIEWS_AS_TABLES option, you can unload a view's data into a table. The following example shows how to export a view as a table during an export (my_view here stands for the view to be exported):
$ ./expdp directory=dpump dumpfile=abcd.dmp logfile=abcd.log views_as_tables=my_view
6. Real-Time ADDM analysis
Analyzing the past and current health of the database with tools such as AWR, ASH, and ADDM is part of every DBA's life. Although each of these tools can measure the overall health and performance of the database at various levels, none of them can be used when the database is unresponsive or completely hung.
If you encounter an unresponsive or hung database and have Oracle Enterprise Manager 12c Cloud Control configured, you can diagnose serious performance problems. It gives you a picture of what is currently happening in the database as a whole, and may also suggest remedies to resolve the issue.
The following steps describe how to analyze the state of the database with Oracle Enterprise Manager Cloud Control:
1) On the database home page, select the Emergency Monitoring option from the Performance menu. This shows the blocking sessions in the Hang Analysis table at the top.
2) Select the Real-Time ADDM option from the Performance menu to perform Real-Time ADDM analysis.
3) After the performance data has been collected, click the Findings tab on the page to get an interactive summary of all the findings.
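Outside Cloud Control, a classic snapshot-based (not real-time) ADDM analysis can still be run manually through the DBMS_ADDM package; this is a sketch, and the task name and snapshot IDs are hypothetical:

```sql
SQL> VAR tname VARCHAR2(60)
SQL> EXEC :tname := 'addm_manual'; DBMS_ADDM.ANALYZE_INST(:tname, 100, 101);
SQL> SELECT DBMS_ADDM.GET_REPORT(:tname) FROM dual;
```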
7. Gathering statistics on multiple tables concurrently
In previous Oracle versions, whenever you ran a DBMS_STATS procedure to gather table, index, schema, or database-level statistics, Oracle collected statistics one table at a time. If the table was large, increasing the degree of parallelism was recommended. With 12c R1, you can now gather statistics on multiple tables, partitions, and sub-partitions concurrently. Before you start using this feature, you must make the following settings at the database level to enable it:
SQL> ALTER SYSTEM SET RESOURCE_MANAGER_PLAN = 'DEFAULT_MAIN';
SQL> ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 4;
SQL> EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'ALL');
SQL> EXEC DBMS_STATS.GATHER_SCHEMA_STATS('SCOTT');
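Because concurrent statistics gathering is carried out by Scheduler jobs, progress can be monitored through the Scheduler views; this is a sketch, and the ST$ job-name prefix is an assumption that may vary by version:

```sql
SQL> SELECT job_name, state
     FROM dba_scheduler_jobs
     WHERE job_name LIKE 'ST$%';
```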