Tuesday, December 24, 2013

What could make your query suddenly run slow?

Many times the development team has asked me, "Why did this query become slow when it used to run quickly?" I think Jonathan Lewis answered this question well, as follows:

  1. A change in execution plan due to unlucky bind variable peeking with histograms
  2. A change in execution plan because the statistics have been slowly going out of sync with the data/query
  3. A change in execution plan because new (and unlucky) statistics have been generated since the last time query ran.
  4. A change in execution plan because a few new, empty, partitions have been added to a critical table
  5. An unlucky change in execution plan because a small change in actual data volume (with correct stats in place) can have a dramatic impact on the shape of the plan
  6. A change in execution plan because you had an incomplete set of hints in your SQL and your luck just ran out
  7. An unlucky change in data order that has a big impact on the success of subquery caching
  8. A small slippage in timing that results in the SQL suffering a direct data collision (locking / read-consistency) with some other process.
  9. A small slippage in timing that leaves the SQL running concurrently with something that is processing unrelated data but competing for the raw (disc / CPU / PX Slave) resources.
  10. A small slippage in timing that means some other process cleared everything you wanted from the buffer cache before your SQL started running.
  11. A small slippage in timing that means a process that normally follows your SQL preceded it, and you’re spending all your time on delayed block cleanout.
  12. Someone dropped some indexes (or made them invalid)
  13. Someone rebuilt one or more indexes since the last time the SQL executed
  14. Someone has done a “shrink space” on a critical table since the last time you ran the SQL
Reference:
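When chasing a plan change like those above, a query such as the following can reveal whether a statement has switched plans over time. This is a sketch, not from the original post: it assumes AWR is available (Diagnostics Pack license) and that you substitute your statement's SQL_ID for the `&sql_id` placeholder.

```sql
-- Illustrative sketch: list the plans recorded in AWR for one SQL_ID,
-- with average elapsed time per execution, so a plan flip shows up as a
-- change in PLAN_HASH_VALUE over the snapshot timeline.
select sn.begin_interval_time,
       st.plan_hash_value,
       st.executions_delta,
       round(st.elapsed_time_delta / nullif(st.executions_delta, 0) / 1000) avg_ms
from   dba_hist_sqlstat  st
join   dba_hist_snapshot sn
       on  sn.snap_id         = st.snap_id
       and sn.dbid            = st.dbid
       and sn.instance_number = st.instance_number
where  st.sql_id = '&sql_id'
order  by sn.begin_interval_time;
```

If the PLAN_HASH_VALUE column changes between snapshots, one of the execution-plan causes in the list above is the likely culprit; if the plan is stable but avg_ms jumps, look at the timing/contention causes instead.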

Monday, December 23, 2013

How to convert Partition table from RANGE to INTERVAL



This topic follows up on previous tips by the Burleson team, showing step by step how to convert a partitioned table from range to interval in order to avoid the ORA-14400 error described before.
Let's assume our partitioned table is as below:

SQL> CREATE TABLE SALES
  (
    SALES_DATE DATE NOT NULL,
    SALES_DESC VARCHAR2(100)
  )
  PARTITION BY RANGE (SALES_DATE)
  (
    PARTITION P1   VALUES LESS THAN (TO_DATE('2012-01-01', 'YYYY-MM-DD')),
    PARTITION P2   VALUES LESS THAN (TO_DATE('2012-07-01', 'YYYY-MM-DD')),
    PARTITION P3   VALUES LESS THAN (TO_DATE('2013-01-01', 'YYYY-MM-DD')),
    PARTITION PMAX VALUES LESS THAN (MAXVALUE)
  );

Table created.

The first step is to ensure that we don't have a MAXVALUE partition; otherwise we would hit:
ORA-14759: SET INTERVAL is not legal on this table.
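For illustration, this is what happens if you try to enable interval partitioning while the MAXVALUE partition still exists. The transcript below is a sketch against the SALES table above; the error text is the one quoted in this post.

```sql
-- With PMAX still in place, SET INTERVAL is rejected:
SQL> alter table SALES set INTERVAL(NUMTOYMINTERVAL(6, 'MONTH'));
alter table SALES set INTERVAL(NUMTOYMINTERVAL(6, 'MONTH'))
*
ERROR at line 1:
ORA-14759: SET INTERVAL is not legal on this table.
```

This is why we check for, and drop, the MAXVALUE partition first.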

SQL> select partition_name, interval, high_value from user_tab_partitions where table_name = 'SALES' order by partition_position;

PARTITION_NAME  INT HIGH_VALUE
--------------- --- ------------------------------------------------------------------------------------
P1              NO  TO_DATE(' 2012-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
P2              NO  TO_DATE(' 2012-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
P3              NO  TO_DATE(' 2013-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
PMAX            NO  MAXVALUE

As shown above, the SALES table has a MAXVALUE partition, and that would prevent enabling interval partitioning. Since this partition was created only to avoid insert outages, it should contain zero rows, so we can drop it with no problem; otherwise, back up its data, truncate the partition, and then drop it.

SQL> select count(*) from SALES partition (PMAX);
  COUNT(*)
----------
         0

SQL> alter table SALES drop partition PMAX;

Table altered.

In the next step we will convert the SALES table to interval partitioning and define the interval period for new partitions.
Here I'm going to define a six-month interval:
SQL> alter table SALES set INTERVAL(NUMTOYMINTERVAL(6, 'MONTH'));

Table altered.

SQL> select partition_name, interval, high_value from user_tab_partitions where table_name = 'SALES' order by partition_position;

PARTITION_NAME  INT HIGH_VALUE
--------------- --- ------------------------------------------------------------------------------------
P1              NO  TO_DATE(' 2012-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
P2              NO  TO_DATE(' 2012-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
P3              NO  TO_DATE(' 2013-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')



Next, let's test it by inserting a row whose date falls beyond the highest defined range.

SQL> insert into SALES values (TO_DATE('2013-05-10', 'YYYY-MM-DD'), 'DELL PC 930');

1 row created.

SQL> commit;

Commit complete.
SQL> select partition_name, interval, high_value from user_tab_partitions where table_name = 'SALES' order by partition_position;

PARTITION_NAME  INT HIGH_VALUE
--------------- --- ------------------------------------------------------------------------------------
P1              NO  TO_DATE(' 2012-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
P2              NO  TO_DATE(' 2012-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
P3              NO  TO_DATE(' 2013-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P731        YES TO_DATE(' 2013-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')

The newly created partition has a system-generated name and YES in its INTERVAL column, and it has been allocated the correct six-month range defined previously.

Finally, we can easily rename this system-generated partition to any name:
SQL> alter table SALES rename partition SYS_P731 to P4;

Table altered.

SQL> select partition_name, interval, high_value from user_tab_partitions where table_name = 'SALES' order by partition_position;

PARTITION_NAME  INT HIGH_VALUE
--------------- --- ------------------------------------------------------------------------------------
P1              NO  TO_DATE(' 2012-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
P2              NO  TO_DATE(' 2012-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
P3              NO  TO_DATE(' 2013-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
P4              YES TO_DATE(' 2013-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')



Sunday, December 22, 2013

Oracle Interval Partitioning Tips

I found this topic wonderful and helpful, and since there is no reblogging tool on dba-oracle.com, I have copied it here.

Interval partitioning is an enhancement to range partitioning in Oracle 11g that automatically creates time-based partitions as new data is added.
Range partitioning allows an object to be partitioned by a specified range on the partitioning key.  For example, if a table was used to store sales data, it might be range partitioned by a DATE column, with each month in a different partition. 
Therefore, every month a new partition would need to be defined in order to store rows for that month.  If a row was inserted for a new month before a partition was defined for that month, the following error would result:
ORA-14400: inserted partition key does not map to any partition 
If this situation occurs, data loading will fail until the new partitions are created.  This can cause serious problems in larger data warehouses where complex reporting has many steps and dependencies in a batch process.  Mission critical reports might be delayed or incorrect due to this problem. 

An Interval Partitioning Example

Interval partitioning can simplify the manageability by automatically creating the new partitions as needed by the data.  Interval partitioning is enabled in the table’s definition by defining one or more range partitions and including a specified interval.  For example, consider the following table:
create table
pos_data (
   start_date        DATE,
   store_id          NUMBER,
   inventory_id      NUMBER(6),
   qty_sold          NUMBER(3)
)
PARTITION BY RANGE (start_date)
INTERVAL(NUMTOYMINTERVAL(1, 'MONTH'))
(
   PARTITION pos_data_p0 VALUES LESS THAN (TO_DATE('1-7-2007', 'DD-MM-YYYY')),
   PARTITION pos_data_p1 VALUES LESS THAN (TO_DATE('1-8-2007', 'DD-MM-YYYY'))
);
 
Here, two partitions have been defined and an interval of one month has been specified.  If data is loaded into this table with a later date than the greatest defined partition, Oracle will automatically create a new partition for the new month.  In the table above, the greatest defined interval is between July 1, 2007 and August 1, 2007. 
Inserting a row that has a date later than August 1, 2007 would raise an error with normal range partitioning.  However, with interval partitioning, Oracle determines the high value of the defined range partitions, called the transition point, and creates new partitions for data that is beyond that high value.
insert into pos_data (start_date, store_id, inventory_id, qty_sold)
values ( '15-AUG-07', 1, 1, 1);
SELECT
   TABLE_NAME,
   PARTITION_NAME,
   PARTITION_POSITION,
   HIGH_VALUE
FROM
   DBA_TAB_PARTITIONS
WHERE
   TABLE_NAME='POS_DATA'
ORDER BY
   PARTITION_NAME;
PARTITION_NAME    HIGH_VALUE
POS_DATA_P0       TO_DATE(' 2007-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
POS_DATA_P1       TO_DATE(' 2007-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P81           TO_DATE(' 2007-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
Notice that a system generated partition named SYS_P81 has been created upon inserting a row with a partition key greater than the transition point.  Oracle will manage the creation of new partitions for any value beyond the high value.  Therefore, the values do not need to be inserted in sequence.
Since the partitions are named automatically, Oracle has added a new syntax in order to reference specific partitions effectively.  The normal way to reference a specific partition is to use the partition (partition_name) in the query:
select
   *
from
   pos_data partition (SYS_P81);
However, it would be cumbersome to look up the system generated partition name each time.  Therefore, the new syntax to specify a partition is by using the partition for (DATE) clause in the query:
select
   *
from
   pos_data partition for (to_date('15-AUG-2007','dd-mon-yyyy')); 
Another useful feature of partitioning is the ability to distribute partitions across different tablespaces.  With interval partitioning, this can be accomplished by naming all of the tablespaces in the table definition’s “store in” clause.  The system created partitions are then assigned to different tablespaces in a round robin manner.  For example, if the choice was to distribute the table across three tablespaces - tablespaceA, tablespaceB, and tablespaceC - use the following clause in the table definition.
INTERVAL(NUMTOYMINTERVAL(1, 'MONTH'))
STORE IN (tablespaceA, tablespaceB, tablespaceC)
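Putting the clause in context, a complete table definition using round-robin tablespace assignment might look like the sketch below. The table name pos_data_ts is hypothetical, and tablespaceA/B/C are assumed to already exist.

```sql
-- Sketch: interval partitions assigned round-robin across three tablespaces.
CREATE TABLE pos_data_ts (
   start_date    DATE,
   store_id      NUMBER,
   qty_sold      NUMBER(3)
)
PARTITION BY RANGE (start_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
STORE IN (tablespaceA, tablespaceB, tablespaceC)
(
   -- at least one range partition is still required before the interval begins
   PARTITION p0 VALUES LESS THAN (TO_DATE('1-7-2007', 'DD-MM-YYYY'))
);
```

Each automatically created partition is then placed in the next tablespace in the list, cycling back to the first.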

Restrictions on  Interval Partitioning

There are a few restrictions on interval partitioning that must be taken into consideration before deciding if it is appropriate for the business requirement:
  • Cannot be used for index organized tables
  • Must use only one partitioning key column and it must be a DATE or NUMBER
  • Cannot create domain indexes on interval partitioned tables
  • Are not supported at the sub-partition level
This feature should be used as an enhancement to range partitioning when uniform distribution of range intervals for new partitions is acceptable.  If the requirement demands the use of uneven intervals when adding new partitions, then interval partitioning would not be the best solution.

Interval Partitioning Commands

There are a few new commands to manage interval partitioning.  First, convert a range partitioned table to use interval partitioning by using alter table <table_name> set interval(expr).
Consider this range partitioned table:
create table
pos_data_range (
   start_date        DATE,
   store_id          NUMBER,
   inventory_id      NUMBER(6),
   qty_sold          NUMBER(3)
)
   PARTITION BY RANGE (start_date)
(
   PARTITION pos_data_p0 VALUES LESS THAN (TO_DATE('1-7-2007', 'DD-MM-YYYY')),
   PARTITION pos_data_p1 VALUES LESS THAN (TO_DATE('1-8-2007', 'DD-MM-YYYY'))
);
 
If a row with a date of August 15, 2007 is inserted into the table, it will cause an error.
SQL> insert into pos_data_range (start_date, store_id, inventory_id, qty_sold)
  2  values ( '15-AUG-07', 1, 1, 1);
insert into pos_data_range (start_date, store_id, inventory_id, qty_sold)
            *
ERROR at line 1:
ORA-14400: inserted partition key does not map to any partition
This range partitioned table can easily be converted to use interval partitioning by using the following command:
alter table pos_data_range set INTERVAL(NUMTOYMINTERVAL(1, 'MONTH'));
Interval partitioning is now enabled, and the row with 15-AUG-07 can be inserted without error since Oracle will automatically create the new partition. To convert the table back to only range partitioning, use the following command:
alter table pos_data_range set INTERVAL();
The table is converted back to a range partitioned table and the boundaries for the interval partitions are set to the boundaries for the range partitions.
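One way to confirm the conversion either direction is to check the INTERVAL flag in the data dictionary. This is a sketch against the pos_data_range table above; partitions created automatically show YES in that column, while range partitions (including former interval partitions after `set INTERVAL()`) show NO.

```sql
-- Sketch: inspect whether each partition is range (NO) or interval (YES).
select partition_name, interval, high_value
from   user_tab_partitions
where  table_name = 'POS_DATA_RANGE'
order  by partition_position;
```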
Using the same syntax, the interval can also be changed for existing interval partitioned tables.  If changing the original table to be partitioned every three months instead of monthly, use:
alter table pos_data set INTERVAL(NUMTOYMINTERVAL(3, 'MONTH'));
After inserting a row with the date of 15-NOV-07, a new partition is automatically generated with a high value of 01-DEC-07.
insert into
   pos_data (start_date, store_id, inventory_id, qty_sold)
values
   ('15-NOV-07', 1, 1, 1);
SELECT
   TABLE_NAME, PARTITION_NAME, PARTITION_POSITION, HIGH_VALUE
FROM
   DBA_TAB_PARTITIONS
WHERE 
   TABLE_NAME='POS_DATA'
ORDER BY
   PARTITION_NAME;
PARTITION_NAME    HIGH_VALUE
POS_DATA_P0       TO_DATE(' 2007-07-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
POS_DATA_P1       TO_DATE(' 2007-08-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P81           TO_DATE(' 2007-09-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
SYS_P84           TO_DATE(' 2007-12-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')
The tablespace storage of the interval partitioned table can also be changed using a similar syntax.  For example, when using a round robin tablespace assignment for the table between tablespace1 to tablespace3, issue the following command:
alter table pos_data set STORE IN(tablespace1, tablespace2, tablespace3);
Oracle interval partitioning offers a very useful extension to range partitioning.  This greatly improves the manageability of range partitioned tables.  In addition to providing system generated new partitions, Oracle has provided a new syntax to simplify the reference of specific partitions.  Furthermore, Oracle offers a group of commands to manage the new partitioning option. 


Interval Partitioning for Oracle 11g
By Ben Prusinski
Interval partitioning builds upon range partitioning in Oracle 11g. It resolves the limitation of range partitioning when a specific range is unknown to the developer or DBA creating the partitions for the table: it tells Oracle to automatically set up new partitions for a particular interval when data inserted into the table exceeds the existing range partitions.
As such, interval partitioning requires that at least one range partition is specified. Once the range partitioning key is given for the high value of the range partitions, this transition point is used as the baseline to create interval partitions beyond it.
The nice thing about the new interval partitioning feature in Oracle 11g is that it eases the management of new partitions for the busy Oracle DBA or development staff. The following exercise demonstrates how interval partitioning works in Oracle 11g. First we will create a new table to use for interval partitioning:
SQL> CREATE TABLE sales_interval
  2  (product_id                NUMBER(6),
  3  customer_id                NUMBER,
  4  time_id                    DATE,
  5  channel_info               CHAR(1),
  6  promo_id                   NUMBER(6),
  7  qty_sold                   NUMBER(3),
  8  amt_sold                   NUMBER(10,2)
  9  )
 10  PARTITION BY RANGE (time_id)
 11  INTERVAL(NUMTOYMINTERVAL(1, 'MONTH'))
 12  (PARTITION t0 VALUES LESS THAN (TO_DATE('1-1-2005','DD-MM-YYYY')),
 13  PARTITION t1 VALUES LESS THAN (TO_DATE('1-1-2006','DD-MM-YYYY')),
 14  PARTITION t2 VALUES LESS THAN (TO_DATE('1-7-2006','DD-MM-YYYY')),
 15  PARTITION t3 VALUES LESS THAN (TO_DATE('1-1-2007','DD-MM-YYYY')) ); 
Table created.

Figure 1.8: Interval Partitioning Example with Oracle 11g
The above statement creates a table with four partitions, using a one-month interval width and January 1, 2007 as the transition point: the t3 partition is the high bound, t0 through t3 form the range section, and all partitions above it are created in the interval range. However, like other forms of partitioning, there are some limitations.
 Interval partitioning restrictions include:
  • Index-organized tables (IOTs) are not supported by interval partitioning.
  • Domain indexes cannot be created on interval-partitioned tables.
  • Only one partitioning key column can be set for the interval partition, and it must be a DATE or NUMBER data type.

 Reference:
http://www.dba-oracle.com/t_interval_partitioning.htm

Sunday, December 15, 2013

Oracle 12c Database Installation

This is a step by step Oracle Database 12c Release 1 installation on Windows Server 2008 (64x).
After downloading the software from the Oracle website, unzip the downloaded files into one folder and run the setup file; then do the following:


     1-      After launching the Oracle Installer, you may provide your email and Oracle Support password; otherwise, uncheck the checkbox and press Next.


     2-      On the next screen you will be asked whether you want to download the latest updates from Oracle; if so, enter your Oracle Support credentials. For test purposes I'm going to select Skip Software Updates.


     3-      Select your installation option: install only the binaries, install the binaries and upgrade the current database, or install the binaries and create a new database. Choose the appropriate radio button and click Next.


     4-      On the grid installation options screen, select the type of installation you plan to perform: single node, RAC, or RAC One Node if you have one node now and plan to add more nodes in the future.


     5-      Select any additional languages you might use for your database.


     6-      Select the edition you are going to license.


     7-      One of the new features is the Oracle Home User, which is used to run the Windows services for the Oracle Home. If you already have an account, enter it along with its password, or create a new user, which is what I'm going to do.

Note: Oracle 12c supports using a non-administrator (standard) account for this purpose.


     8-      Enter your Oracle Base and Oracle Home paths and then click Next.


     9-      The Perform Prerequisite Checks screen verifies that the minimum requirements are satisfied.

The beauty here is that you can take action to fix reported issues without needing to relaunch the installer, as was required in previous Oracle versions.


     10-      If everything goes smoothly you will end up at the Summary screen, which summarizes your inputs. Review them and click Install; if you need to change an option, click Edit beside it.


      The remaining two steps show the installation progress and then confirm that the installation finished successfully; click Finish.


Congratulations, you now have the latest version of Oracle Database installed on your PC or server.
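As a quick sanity check after the installer finishes (a sketch, not part of the installer flow; it assumes the new 12c home is first in your PATH), you can connect to the new database and confirm the release and instance state:

```sql
-- Connect as SYSDBA and verify the freshly installed database.
SQL> connect / as sysdba
SQL> select banner from v$version where banner like 'Oracle Database%';
SQL> select instance_name, status from v$instance;
```

The banner should report a 12.1 release, and the instance status should be OPEN if you chose to create a database during installation.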




Tuesday, December 3, 2013

11gR2 RAC-RAC Data Guard

Technical Architecture of 11g R2 RAC primary to RAC standby Data Guard Configuration:

PARAMETERS                        PRIMARY                                STANDBY
--------------------------------  -------------------------------------  -------------------------------------
Clusterware                       11g R2 Grid Infrastructure (11.2.0.3)  11g R2 Grid Infrastructure (11.2.0.3)
Cluster Nodes                     psrv1, psrv2 (2-node RAC)              drsrv1, drsrv2 (2-node RAC)
SCAN                              psrv-scan.feco.com                     drsrv-scan.hamid.com
SCAN Listener Host/port           SCAN VIPs (port 1521)                  SCAN VIPs (port 1521)
VIPs                              psrv1-vip, psrv2-vip                   drsrv1-vip, drsrv2-vip
DB_UNIQUE_NAME                    pdb                                    drdb
DB_NAME                           pdb                                    pdb
DB Instances                      pdb1, pdb2                             drdb1, drdb2
DB Listener                       pdb_LISTENER                           pdb_LISTENER
DB Listener Host/port             psrv1-vip, psrv2-vip                   drsrv1-vip, drsrv2-vip
DB Storage                        ASM                                    ASM
File Management                   OMF                                    OMF
ASM Diskgroup for DB Files        DATA                                   DATA
ASM Diskgroup for Recovery Files  FRA                                    FRA
ORACLE_HOME                       E:\app\oracle\product\11.2.0\dbhome_1  E:\app\oracle\product\11.2.0\dbhome_1
11g R2 RAC Version                11.2.0.3                               11.2.0.3
OS                                Windows Server 2008 R2                 Windows Server 2008 R2

Primary Site:
     ·         2-node 11g R2 Grid Infrastructure (11.2.0.3) has been installed and configured.
     ·         2-node RAC software (11.2.0.3) has been installed and configured.
     ·         ASM diskgroups DATA and FRA have been created.
     ·         Database “PDB” is created on ASM and configured to use OMFs.
     ·         Database Instances are configured with LOCAL_LISTENER and REMOTE_LISTENER parameters.

DR Site: 

     ·         2-node 11g R2 Grid Infrastructure (11.2.0.3) has been installed and configured.
     ·         2-node RAC software (11.2.0.3) has been installed and configured.
     ·         ASM diskgroups DATA and FRA have been created.


RAC to RAC Physical Dataguard: 

     ·         Prepare Primary Site
     ·         Prepare Standby Site
     ·         Create RAC Physical Standby Database
     ·         Configure Data Guard Broker
     ·         Perform Switchover using DG Broker

Prepare Primary RAC database for DR configuration: 

     ·         Enable Force Logging.
     ·         Modify init Parameters.
     ·         Enable Archivelog Mode.
     ·         Create the SLRs (Standby Redo Logs).
     ·         Backup the Database for Standby
     ·         Create pfile for standby database.
     ·         Update the tnsnames.ora.

Enable Force Logging: 

alter database force logging; 
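A quick verification that force logging actually took effect (a sketch; run on the primary after the command above):

```sql
-- Should return YES once force logging is enabled.
select force_logging from v$database;
```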

Modify Dataguard related init Parameters:

 These are the parameters we need to pay attention to on both the primary and the standby.

DB_NAME=PDB
DB_UNIQUE_NAME=PDB
LOG_ARCHIVE_CONFIG='DG_CONFIG=(PDB,DRDB)'
LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PDB'
LOG_ARCHIVE_DEST_2='SERVICE=DRDB ARCH VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DRDB'
LOG_ARCHIVE_DEST_STATE_1=ENABLE
LOG_ARCHIVE_DEST_STATE_2=ENABLE
REMOTE_LOGIN_PASSWORDFILE=EXCLUSIVE
LOG_ARCHIVE_FORMAT=%t_%s_%r.arc
LOG_ARCHIVE_MAX_PROCESSES=8
FAL_SERVER=DRDB
DB_FILE_NAME_CONVERT='DRDB','PDB'
LOG_FILE_NAME_CONVERT='DRDB','PDB'
STANDBY_FILE_MANAGEMENT=AUTO

The db_unique_name parameter has already been set to the appropriate value during the initial creation of the RAC database, and log_archive_dest_state_n and remote_login_passwordfile default to ENABLE and EXCLUSIVE respectively. So, only the parameters below need to be changed here.

alter system set LOG_ARCHIVE_CONFIG='DG_CONFIG=(DRDB,PDB)' scope=both sid='*';
alter system set LOG_ARCHIVE_DEST_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PDB' scope=both sid='*';
alter system set LOG_ARCHIVE_DEST_2='SERVICE=DRDB ARCH VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=DRDB' scope=both sid='*';
alter system set log_archive_format='%t_%s_%r.arc' scope=spfile sid='*';
alter system set log_archive_max_processes=8 scope=both sid='*';
alter system set fal_server=DRDB scope=both sid='*';
alter system set db_file_name_convert='+DATA/DRDB','+DATA/PDB' scope=spfile sid='*';
alter system set log_file_name_convert='+FRA/DRDB','+FRA/PDB' scope=spfile sid='*';
alter system set standby_file_management=AUTO scope=both sid='*';

Verify that the values are set correctly for these parameters after bouncing the database.

set linesize 500 pages 0
col value for a80
col name for a50

select name, value from v$parameter
where name in ('db_name','db_unique_name','log_archive_config',
               'log_archive_dest_1','log_archive_dest_2',
               'log_archive_dest_state_1','log_archive_dest_state_2',
               'remote_login_passwordfile','log_archive_format',
               'log_archive_max_processes','fal_server',
               'db_file_name_convert','log_file_name_convert',
               'standby_file_management');

db_name                    pdb
db_unique_name             pdb
log_archive_config         dg_config=(drdb,pdb)
log_archive_dest_1         LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PDB
log_archive_dest_2         service=drdb LGWR ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=drdb
log_archive_dest_state_1   enable
log_archive_dest_state_2   enable
remote_login_passwordfile  EXCLUSIVE
log_archive_format         %t_%s_%r.arc
log_archive_max_processes  10
fal_server                 DRDB1, DRDB2
db_file_name_convert       +DATA/DRDB, +DATA/PDB
log_file_name_convert      +FRA/DRDB, +FRA/PDB, +DATA/DRDB, +DATA/PDB
standby_file_management    AUTO

Enable Archivelog Mode:

If our primary is not yet in archivelog mode, stop the database, mount a single instance, enable archivelog mode, and open it:

srvctl stop database -d pdb
startup mount                  -- only on one instance
alter database archivelog;
alter database open;
srvctl start database -d pdb   -- this starts the remaining instances on the cluster
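To confirm the mode change took effect (a sketch; run from SQL*Plus once the database is open):

```sql
-- After reopening, the database should report ARCHIVELOG mode.
select log_mode from v$database;
```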


There should be a minimum of (threads) * (groups per thread + 1) SLRs created on the primary and standby databases. There are 2 threads with 3 groups per thread in this configuration on the primary side, so a minimum of 2 * (3 + 1) = 8 SLRs needs to be created; the script below creates 9, which satisfies that minimum.

Let's get the total number of groups and the maximum logfile size from v$log.

SQL> select max (bytes), count (1) from v$log;

MAX(BYTES)   COUNT(1)
---------- ----------
  52428800         6

Here, the total number of online redo logs is 6 and the maximum size of these groups is 50M. So, 3 standby redo log groups per thread, with a size of 50M each, should be created on the primary as well as the standby database.

Before creating the standby logs, it is better to create a separate path on ASM (connected as the grid user using asmcmd):

ASMCMD> mkdir +fra/pdb/standbylog

Now let's create the standby logs on the primary:

alter system set standby_file_management=manual scope=both sid='*';

alter database add standby logfile thread 1 group 7 '+fra/pdb/standbylog/standby_group_07.log' size 50M;
alter database add standby logfile thread 1 group 8 '+fra/pdb/standbylog/standby_group_08.log' size 50M;
alter database add standby logfile thread 1 group 9 '+fra/pdb/standbylog/standby_group_09.log' size 50M;
alter database add standby logfile thread 2 group 10 '+fra/pdb/standbylog/standby_group_10.log' size 50M;
alter database add standby logfile thread 2 group 11 '+fra/pdb/standbylog/standby_group_11.log' size 50M;
alter database add standby logfile thread 2 group 12 '+fra/pdb/standbylog/standby_group_12.log' size 50M;
alter database add standby logfile thread 3 group 13 '+fra/pdb/standbylog/standby_group_13.log' size 50M;
alter database add standby logfile thread 3 group 14 '+fra/pdb/standbylog/standby_group_14.log' size 50M;
alter database add standby logfile thread 3 group 15 '+fra/pdb/standbylog/standby_group_15.log' size 50M;

alter system set standby_file_management=auto scope=both sid='*';
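Once the groups are added, a quick check against v$standby_log (a sketch) confirms the standby redo logs exist with the intended thread assignment and size:

```sql
-- Sketch: verify the standby redo logs were created at 50M each.
select thread#, group#, bytes / 1024 / 1024 as size_mb, status
from   v$standby_log
order  by thread#, group#;
```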

Backup the Primary Database for Standby:

Take the backup of the primary database and the standby controlfile. Create the staging directory to hold the RMAN backup.

Create backup directory (E:\app\oracle\backup)

rman target / nocatalog
run
{
     sql "alter system switch logfile";
     allocate channel ch1 type disk format 'E:\app\oracle\Backup\Primary_bkp_for_stndby_%U';
     backup database;
     backup current controlfile for standby;
     sql "alter system archive log current";
}

Create pfile for Standby:
create pfile='E:\app\oracle\Backup\pfile_for_standby.txt' from spfile; 

Update TNSNAMES.ora

Add the TNS aliases for the standby database and for each instance of both the primary and standby databases.
  
DRDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = drsrv-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = drdb.hamid.com)
      (UR = A)
    )
  )
DRDB2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = drsrv2-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = drdb.hamid.com)
      (INSTANCE_NAME = drdb2)
      (UR = A)
    )
  )

DRDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = drsrv1-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = drdb.hamid.com)
      (INSTANCE_NAME = drdb1)
      (UR = A)
    )
  )

PDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = psrv-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdb.feco.com)
    )
  )


PDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = psrv1-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdb.feco.com)
      (INSTANCE_NAME = pdb1)
    )
  )


PDB2 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = psrv2-vip)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdb.feco.com)
      (INSTANCE_NAME = pdb2)
    )
  )
Copy tnsnames.ora to $ORACLE_HOME/network/admin on every instance so that all instances have the same file.

Prepare Standby Site for Dataguard Configuration:

     ·         Copy the RMAN backup and pfile_for_standby.txt.
     ·         Copy the password file.
     ·         Create required directories for Standby Instance.
     ·         Modify the pfile.
     ·         Copy the tnsnames.ora from Primary and add the TNS Alias for the standby database instances.
     ·         Create the ASM directories under the DATA and FRA diskgroup.

Copy the RMAN Backup files and init.ora from Primary to Standby: 

Copy the backups from primary node PSRV1 to DRSRV1 under the same location as on the primary (E:\app\oracle\backup). Also copy the pfile that was created for the standby on PSRV1 to DRSRV1.

Copy the Password File from Primary to all the nodes in Standby. 

Copy the file $ORACLE_HOME\database\PWDpdb1 from Primary PSRV1 to both the standby nodes under the directory $ORACLE_HOME\database with the name of PWDdrdb1 and PWDdrdb2 for drsrv1 and drsrv2 respectively.

Create required dump file directories for the Standby Instances 

On drsrv1: 
mkdir  E:\app\oracle\admin\drdb\adump
mkdir E:\app\oracle\diag\rdbms\drdb\drdb1
cd E:\app\oracle\diag\rdbms\drdb\drdb1
mkdir trace cdump

On drsrv2: 
mkdir  E:\app\oracle\admin\drdb\adump
mkdir E:\app\oracle\diag\rdbms\drdb\drdb2
cd E:\app\oracle\diag\rdbms\drdb\drdb2
mkdir trace cdump

Modify pfile_for_standby.txt file
The standby-specific parameters (db_unique_name, control_files, fal_server/fal_client, the *_name_convert parameters, the instance/thread/undo settings, and the log_archive destinations) are the ones modified for the standby database; the rest remain the same on both the primary and the standby.

*.audit_file_dest='E:\app\oracle\admin\drdb\adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='11.2.0.0.0'
*.control_files='+FRA/drdb/controlfile/control02.ctl'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+DATA'
*.db_create_online_log_dest_2='+FRA'
*.db_domain='hamid.com'
*.db_file_name_convert='+DATA/pdb','+DATA/drdb'
*.db_name='pdb'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=4039114752
*.db_unique_name='drdb'
*.diagnostic_dest='E:\app\oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=pdbXDB)'
*.fal_server='pdb'
*.fal_client='drdb'
drdb1.instance_number=1
drdb2.instance_number=2
*.log_archive_config='DG_CONFIG=(pdb,drdb)'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=drdb'
*.log_archive_dest_2='SERVICE=pdb ARCH VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=pdb'
*.log_archive_format='%t_%s_%r.arc'
*.log_archive_max_processes=8
*.log_file_name_convert='+FRA/pdb','+FRA/drdb'
*.memory_target=1484783616
*.open_cursors=300
*.processes=1024
*.remote_listener='drsrv-scan.hamid.com:1521'
*.remote_login_passwordfile='exclusive'
*.standby_file_management='AUTO'
drdb2.thread=2
drdb1.thread=1
drdb1.undo_tablespace='UNDOTBS1'
drdb2.undo_tablespace='UNDOTBS2'

Copy the modified tnsnames.ora from pdb1 to all standby instances.
Now modify the LOCAL_LISTENER parameter entries to reflect the DR-specific VIP host names (on the standby nodes only).
LISTENER_DRDB1=(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = drsrv1-vip.hamid.com)(PORT = 1521)))
LISTENER_DRDB2=(DESCRIPTION =(ADDRESS = (PROTOCOL = TCP)(HOST = drsrv2-vip.hamid.com)(PORT = 1521)))
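With those aliases present in the standby tnsnames.ora, LOCAL_LISTENER can then be pointed at them per instance. One way to express this in the standby pfile (a sketch, using the alias names above):

```
drdb1.local_listener='LISTENER_DRDB1'
drdb2.local_listener='LISTENER_DRDB2'
```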

Create the ASM directories
Connect to ASMCMD as the Grid Infrastructure operating system user and create the directories below.

Set ORACLE_HOME=E:\app\11.2.0\grid
Set ORACLE_SID=+ASM1
ASMCMD> mkdir data/DRDB
ASMCMD> cd data/DRDB
ASMCMD> mkdir PARAMETERFILE DATAFILE TEMPFILE

ASMCMD> mkdir fra/DRDB
ASMCMD> cd fra/DRDB
ASMCMD> mkdir ARCHIVELOG CONTROLFILE ONLINELOG STANDBYLOG

Create the Physical Standby Database

     ·         Start the Instance in NOMOUNT using the pfile_for_standby.txt.
     ·         Check the accessibility of all nodes.
     ·         Restore the Standby Database using RMAN DUPLICATE command.
     ·         Start the Managed Recovery Process
     ·         Create the spfile and start the database on both the nodes using spfile.
     ·         Register the New Standby database to the OCR.

Start the Instance in NOMOUNT state

On the Windows platform only, first create the instance as a Windows service using ORADIM; then set your environment and start the DRDB1 instance using the E:\app\oracle\backup\pfile_for_standby.txt parameter file.

ORADIM -new -SID DRDB1             #on drsrv1
ORADIM -new -SID DRDB2             #on drsrv2

set ORACLE_SID=drdb1
sqlplus / as sysdba
startup nomount pfile=E:\app\oracle\backup\pfile_for_standby.txt

Check the accessibility of all nodes

Check the accessibility of all nodes from each node on both primary and standby servers.
Note:
Don’t go further unless you’re able to access all nodes; otherwise, double-check your listeners and services using lsnrctl status.
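A quick way to run this check, assuming the TNS aliases defined earlier, is to tnsping each alias from every node:

```
tnsping pdb
tnsping pdb1
tnsping pdb2
tnsping drdb
tnsping drdb1
tnsping drdb2
```

Each alias should resolve and report OK before you attempt the RMAN duplicate.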

Restore the RMAN backup using DUPLICATE DATABASE option:

After copying the backup to the same path as on the primary, connect to the target database (PDB) and the auxiliary instance (DRDB) from the drsrv1 host to start the DUPLICATE.
rman target sys@pdb1 auxiliary /
DUPLICATE TARGET DATABASE FOR STANDBY NOFILENAMECHECK;

Start the Managed Recovery Process:

Start the managed recovery process on drdb1 and verify that log transport and log application are happening. The alert log is a quick and easy way to see whether log transport, gap resolution, and log application are working as expected.

ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;

Perform a log switch on the primary database to initiate log transport.
alter system switch logfile;

To check the gap information:
select * from v$archive_gap;
select process, client_process, sequence#, status from v$managed_standby;
select sequence#, first_time, next_time, applied from v$archived_log;
select archived_thread#, archived_seq#, applied_thread#, applied_seq# from v$archive_dest_status;
select thread#, max (sequence#) from v$log_history group by thread#;
select thread#, max (sequence#) from v$archived_log where APPLIED='YES' group by thread#;
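On 11g the transport and apply lag can also be read directly from the standby, which is often quicker than comparing sequence numbers by hand:

```sql
-- run on the standby instance
select name, value, time_computed
from v$dataguard_stats
where name in ('transport lag', 'apply lag');
```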

Create spfile from pfile:

Create the spfile from the pfile in ASM so that it can be seen from the other nodes in the cluster, then restart the current node so it runs with the newly created spfile.

create spfile='+data/drdb/parameterfile/spfileDRDB.ora' from pfile='E:\app\oracle\backup\pfile_for_standby.txt';
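For the instances to pick up the shared spfile at the next startup, each node needs a local init file pointing at it. A sketch, assuming the default %ORACLE_HOME%\database location on Windows:

```
REM contents of initdrdb1.ora on drsrv1 (initdrdb2.ora on drsrv2 analogously)
SPFILE='+DATA/drdb/parameterfile/spfileDRDB.ora'
```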

Add Standby database and Instances to the OCR:

Add the standby database and its instances to the OCR so that it is managed by CRS.
srvctl add database -d drdb -n pdb -o E:\app\oracle\product\11.2.0\dbhome_1 -m hamid.com -p +data/drdb/parameterfile/spfileDRDB.ora -r physical_standby -a DATA,FRA
srvctl add instance -d drdb -i drdb1 -n drsrv1
srvctl add instance -d drdb -i drdb2 -n drsrv2
srvctl start database -d drdb -o mount
srvctl modify instance -d drdb -i drdb1 -s +ASM1
srvctl modify instance -d drdb -i drdb2 -s +ASM2
srvctl enable asm -n drsrv1 -i +ASM1
srvctl enable asm -n drsrv2 -i +ASM2

Verify the configuration of standby database drdb.

srvctl config database -d drdb

Create the Standby Redo Logs (SRLs) on Standby:

The DUPLICATE DATABASE command replicated the same number of online redo logs and standby redo logs from the primary database to the standby, so there was no need to create them here.
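Had the standby redo logs been missing, they could be verified and added manually, sized to match the online redo logs. A sketch (the 50 MB size here is an assumption; use your actual online redo log size):

```sql
-- verify what DUPLICATE created
select group#, thread#, bytes/1024/1024 mb, status from v$standby_log;

-- add standby redo logs per thread if needed
alter database add standby logfile thread 1 size 52428800;
alter database add standby logfile thread 2 size 52428800;
```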

Configure Dataguard Broker:

      ·  Stop the MRP
      ·  Modify the Listener.ora files
      ·  Modify the init Parameters
      ·  Create Configuration
      ·  Enable Configuration
      · Verify the Configuration

Stop the MRP on standby database:

alter database recover managed standby database cancel;

Modify the listener.ora files:

Update the listener.ora on all the nodes in the Data Guard configuration to statically register a service with the local listener for each instance; DGMGRL relies on these *_DGMGRL services to restart instances during switchover.
Add the below lines to the listener.ora file for each of the specified instances.
Primary Node1
SID_LIST_PDB_LISTENER=(SID_LIST=(SID_DESC=(SID_NAME=PDB1)(GLOBAL_DBNAME=PDB_DGMGRL.FECO.COM)(ORACLE_HOME=E:\app\oracle\product\11.2.0\dbhome_1)))
Primary Node2
SID_LIST_PDB_LISTENER=(SID_LIST=(SID_DESC=(SID_NAME=PDB2)(GLOBAL_DBNAME=PDB_DGMGRL.FECO.COM)(ORACLE_HOME=E:\app\oracle\product\11.2.0\dbhome_1)))
Standby Node1
SID_LIST_DRDB_LISTENER=(SID_LIST=(SID_DESC=(SID_NAME=DRDB1)(GLOBAL_DBNAME=DRDB_DGMGRL.HAMID.COM)(ORACLE_HOME=E:\app\oracle\product\11.2.0\dbhome_1)))
Standby Node2
SID_LIST_DRDB_LISTENER=(SID_LIST=(SID_DESC=(SID_NAME=DRDB2)(GLOBAL_DBNAME=DRDB_DGMGRL.HAMID.COM)(ORACLE_HOME=E:\app\oracle\product\11.2.0\dbhome_1)))

After updating these listener.ora files, restart listeners on each of these nodes.
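In a Grid Infrastructure environment the listeners are usually bounced through srvctl rather than lsnrctl directly; a sketch, assuming the default listener resource (repeat per node with the appropriate host name):

```
srvctl stop listener -n drsrv1
srvctl start listener -n drsrv1
lsnrctl status
```

The lsnrctl status output should now show the static *_DGMGRL service for the local instance.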

Modify the init parameter files:

Standby:
mkdir +fra/drdb/DATAGUARDCONFIG
mkdir +data/drdb/DATAGUARDCONFIG
ALTER SYSTEM SET DG_BROKER_CONFIG_FILE1='+fra/drdb/DATAGUARDCONFIG/dgb_config02.ora' SCOPE=BOTH sid='*';
ALTER SYSTEM SET DG_BROKER_CONFIG_FILE2='+data/drdb/DATAGUARDCONFIG/dgb_config01.ora' SCOPE=BOTH sid='*';
alter system set dg_broker_start=true scope=both sid='*';

Primary:
mkdir +fra/pdb/DATAGUARDCONFIG
mkdir +data/pdb/DATAGUARDCONFIG
ALTER SYSTEM SET DG_BROKER_CONFIG_FILE1='+fra/pdb/DATAGUARDCONFIG/dgb_config02.ora' SCOPE=BOTH sid='*';
ALTER SYSTEM SET DG_BROKER_CONFIG_FILE2='+data/pdb/DATAGUARDCONFIG/dgb_config01.ora' SCOPE=BOTH sid='*';
alter system set dg_broker_start=true scope=both sid='*';

Create Configuration

Connect to DGMGRL from any of the primary nodes and run the statements below to create and enable the Data Guard configuration. Verify that the DG Broker is configured correctly and has all the databases and instances registered as expected.
CREATE CONFIGURATION 'DG_Config' AS PRIMARY DATABASE IS 'pdb' CONNECT IDENTIFIER IS 'pdb';
ADD DATABASE 'drdb' AS CONNECT IDENTIFIER IS 'drdb' MAINTAINED AS PHYSICAL;
enable configuration;

Verify Configuration

DGMGRL> show configuration

Configuration - dg_config

  Protection Mode: MaxPerformance
  Databases:
    pdb - Primary database
    drdb    - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

Perform Switchover using DG Broker

DGMGRL> switchover to drdb;
Performing switchover NOW, please wait...
New primary database "drdb" is opening...
Operation requires shutdown of instance "pdb1" on database "pdb"
Shutting down instance "pdb1"...
ORACLE instance shut down.
Operation requires startup of instance "pdb1" on database "pdb"
Starting instance "pdb1"...
ORACLE instance started.
Database mounted.
Switchover succeeded, new primary is "drdb"
DGMGRL> show configuration

Configuration - dg_config

  Protection Mode: MaxPerformance
  Databases:
    drdb - Primary database
    pdb    - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

To check your database role status:

Select database_role from v$database;
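A slightly fuller check, usable on either side, shows the role together with the unique name and open mode:

```sql
select name, db_unique_name, database_role, open_mode, protection_mode
from v$database;
```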


Appendix A


Pfile from pdb database:
pdb2.__db_cache_size=12884901888
pdb1.__db_cache_size=10670309376
pdb2.__java_pool_size=67108864
pdb1.__java_pool_size=67108864
pdb2.__large_pool_size=67108864
pdb1.__large_pool_size=134217728
pdb1.__oracle_base='E:\app\oracle'#ORACLE_BASE set from environment
pdb2.__oracle_base='E:\app\oracle'#ORACLE_BASE set from environment
pdb2.__pga_aggregate_target=11005853696
pdb1.__pga_aggregate_target=11005853696
pdb2.__sga_target=16508780544
pdb1.__sga_target=16508780544
pdb2.__shared_io_pool_size=0
pdb1.__shared_io_pool_size=0
pdb2.__shared_pool_size=3355443200
pdb1.__shared_pool_size=5502926848
pdb2.__streams_pool_size=0
pdb1.__streams_pool_size=0
*.archive_lag_target=0
*.audit_file_dest='E:\app\oracle\admin\pdb\adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='11.2.0.0.0'
*.control_files='+FRA/pdb/controlfile/current.256.819916279'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+FRA'
*.db_domain='feco.com'
*.db_file_name_convert='+DATA/DRDB','+DATA/PDB'
*.db_name='pdb'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=1073490165760
*.dg_broker_config_file1='+fra/pdb/DATAGUARDCONFIG/dgb_config02.ora'
*.dg_broker_config_file2='+data/pdb/DATAGUARDCONFIG/dgb_config01.ora'
*.dg_broker_start=TRUE
*.diagnostic_dest='E:\app\oracle'
*.dispatchers='(protocol=TCP)'
*.fal_client='PDB'
PDB1.fal_client='PDB1'
PDB2.fal_client='PDB2'
*.fal_server='DRDB1','DRDB2'
pdb2.instance_number=2
pdb1.instance_number=1
*.log_archive_config='dg_config=(drdb,pdb)'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=PDB'
*.log_archive_dest_2='service=drdb LGWR ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=drdb'
*.log_archive_dest_state_1='enable'
*.log_archive_dest_state_2='enable'
*.log_archive_format='%t_%s_%r.arc'
pdb1.log_archive_format='%t_%s_%r.arc'
pdb2.log_archive_format='%t_%s_%r.arc'
*.log_archive_max_processes=10
*.log_archive_min_succeed_dest=1
pdb1.log_archive_trace=0
pdb2.log_archive_trace=0
*.log_file_name_convert='+FRA/DRDB','+FRA/PDB','+DATA/DRDB','+DATA/PDB'
*.memory_target=27483176960
*.open_cursors=300
*.processes=3000
*.remote_listener='psrv-scan.Feco.com:1521'
*.remote_login_passwordfile='exclusive'
*.sessions=3305
*.shared_servers=10
*.standby_file_management='AUTO'
pdb2.thread=2
pdb1.thread=1
pdb2.undo_tablespace='UNDOTBS2'
pdb1.undo_tablespace='UNDOTBS1'




DRDB pfile:

drdb1.__db_cache_size=11207180288
drdb2.__db_cache_size=12884901888
drdb1.__java_pool_size=67108864
drdb2.__java_pool_size=67108864
drdb1.__large_pool_size=134217728
drdb2.__large_pool_size=67108864
drdb1.__oracle_base='E:\app\oracle'#ORACLE_BASE set from environment
drdb2.__oracle_base='E:\app\oracle'#ORACLE_BASE set from environment
drdb1.__pga_aggregate_target=11005853696
drdb2.__pga_aggregate_target=11005853696
drdb1.__sga_target=16508780544
drdb2.__sga_target=16508780544
drdb1.__shared_io_pool_size=0
drdb2.__shared_io_pool_size=0
drdb1.__shared_pool_size=4966055936
drdb2.__shared_pool_size=3355443200
drdb1.__streams_pool_size=0
drdb2.__streams_pool_size=0
*.archive_lag_target=0
*.audit_file_dest='E:\app\oracle\admin\drdb\adump'
*.audit_trail='db'
*.cluster_database=true
*.compatible='11.2.0.0.0'
*.control_files='+FRA/drdb/controlfile/control01.ctl'#Restore Controlfile
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+FRA'
*.db_domain='hamid.com'
*.db_file_name_convert='+DATA/pdb','+DATA/drdb'
*.db_name='pdb'
*.db_recovery_file_dest='+FRA'
*.db_recovery_file_dest_size=1073490165760
*.db_unique_name='drdb'
*.dg_broker_config_file1='+fra/drdb/DATAGUARDCONFIG/dgb_config02.ora'
*.dg_broker_config_file2='+data/drdb/DATAGUARDCONFIG/dgb_config01.ora'
*.dg_broker_start=TRUE
*.diagnostic_dest='E:\app\oracle'
*.dispatchers='(protocol=TCP)'
*.fal_client='drdb'
*.fal_server='pdb'
drdb2.instance_number=2
drdb1.instance_number=1
*.log_archive_config='dg_config=(drdb,pdb)'
*.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST VALID_FOR=(ALL_LOGFILES,ALL_ROLES) DB_UNIQUE_NAME=DRDB'
*.log_archive_dest_2='SERVICE=pdb ARCH VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=PDB'
*.log_archive_format='%t_%s_%r.arc'
drdb2.log_archive_format='%t_%s_%r.arc'
drdb1.log_archive_format='%t_%s_%r.arc'
*.log_archive_max_processes=8
*.log_archive_min_succeed_dest=1
drdb2.log_archive_trace=0
drdb1.log_archive_trace=0
*.log_file_name_convert='+FRA/pdb/','+FRA/drdb/'
*.memory_target=27483176960
*.open_cursors=300
*.processes=3000
*.remote_listener='drsrv-scan.hamid.com:1521','drsrv1-vip.hamid.com:1521'
drdb2.remote_listener='drsrv2-vip.hamid.com:1521'
*.remote_login_passwordfile='exclusive'
*.sessions=3305
*.shared_servers=10
*.standby_file_management='AUTO'
drdb2.thread=2
drdb1.thread=1
drdb2.undo_tablespace='UNDOTBS2'
drdb1.undo_tablespace='UNDOTBS1'