Wednesday, December 19, 2018

Udev Rules for AWS EBS Volumes

AWS EBS volumes could be used as ASM disks. When ASMLib or AFD is not used, udev rules must be used to set the correct permissions on the EBS block volumes. It's generally good practice to use a unique identifier (UUID) to identify the partitions that require permissions to be set, as there's no guarantee that partitions or block devices will be attached to the server in the same order and get the same names.
Depending on the tool/option used, a unique identifier may not always be available. For example, the default DOS partition table type created using fdisk would not generate any unique ID, as the parted and udevadm output below shows.
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdb: 268GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End    Size   Type     File system  Flags
 1      1049kB  268GB  268GB  primary  ext4

udevadm info --query=property /dev/xvdd1
DEVNAME=/dev/xvdd1
DEVPATH=/devices/vbd-51760/block/xvdd/xvdd1
DEVTYPE=partition
ID_PART_ENTRY_DISK=202:48
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=2048
ID_PART_ENTRY_SCHEME=dos
ID_PART_ENTRY_SIZE=209713152
ID_PART_ENTRY_TYPE=0x83
ID_PART_TABLE_TYPE=dos
MAJOR=202
MINOR=49
SUBSYSTEM=block
TAGS=:systemd:
USEC_INITIALIZED=634695673
On the other hand, if the partition table type was GPT, several unique IDs would be generated which could be used in the udev rule to identify the partition.
fdisk /dev/sdd

Command (m for help): g
Building a new GPT disklabel (GUID: EC7F1589-8BD2-4C94-8F8F-D22013D40406)


Command (m for help): n
Partition number (1-128, default 1):
First sector (2048-20971486, default 2048):
Last sector, +sectors or +size{K,M,G,T,P} (2048-20971486, default 20971486):
Created partition 1


Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

 fdisk -l /dev/sdd
WARNING: fdisk GPT support is currently new, and therefore in an experimental phase. Use at your own discretion.

Disk /dev/sdd: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: gpt
Disk identifier: EC7F1589-8BD2-4C94-8F8F-D22013D40406


#         Start          End    Size  Type            Name
 1         2048     20971486     10G  Linux filesyste

udevadm info --query=property /dev/sdd1
...
ID_PART_ENTRY_DISK=8:48
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=2048
ID_PART_ENTRY_SCHEME=gpt
ID_PART_ENTRY_SIZE=20969439
ID_PART_ENTRY_TYPE=0fc63daf-8483-4772-8e79-3d69d8477de4
ID_PART_ENTRY_UUID=573dded4-21f1-48ce-925f-e02c5d94dace
ID_PART_TABLE_TYPE=gpt
ID_PATH=pci-0000:00:0d.0-ata-4.0
ID_PATH_TAG=pci-0000_00_0d_0-ata-4_0
...
SUBSYSTEM=block
TAGS=:systemd:
The partition UUID could also be found using blkid.
blkid /dev/sdd1
/dev/sdd1: PARTLABEL="data" PARTUUID="573dded4-21f1-48ce-925f-e02c5d94dace"
The ID_PART_ENTRY_UUID could be used in the udev rule.
KERNEL=="sd?1",ENV{ID_PART_ENTRY_UUID}=="573dded4-21f1-48ce-925f-e02c5d94dace", SYMLINK+="oracleasm/cgdata1", OWNER="oracle", GROUP="asmadmin", MODE="0660"

Similar to fdisk with the g (GPT) option, parted could be used to achieve the same.
parted /dev/xvdd
GNU Parted 3.1
Using /dev/xvdd
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mktable gpt
Warning: The existing disk label on /dev/xvdd will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) mkpart data 0% 100%

(parted) print all
Model: Xen Virtual Block Device (xvd)
Disk /dev/xvdd: 107GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End    Size   File system  Name  Flags
 1      1049kB  107GB  107GB               data

udevadm info --query=property /dev/xvdd1
DEVLINKS=/dev/disk/by-partlabel/data /dev/disk/by-partuuid/5a081ffa-56e2-467b-82a9-16e3a4f441bd
DEVNAME=/dev/xvdd1
DEVPATH=/devices/vbd-51760/block/xvdd/xvdd1
DEVTYPE=partition
ID_PART_ENTRY_DISK=202:48
ID_PART_ENTRY_NAME=data
ID_PART_ENTRY_NUMBER=1
ID_PART_ENTRY_OFFSET=2048
ID_PART_ENTRY_SCHEME=gpt
ID_PART_ENTRY_SIZE=209711104
ID_PART_ENTRY_TYPE=ebd0a0a2-b9e5-4433-87c0-68b6b72699c7
ID_PART_ENTRY_UUID=5a081ffa-56e2-467b-82a9-16e3a4f441bd
ID_PART_TABLE_TYPE=gpt
MAJOR=202
MINOR=49
SUBSYSTEM=block
TAGS=:systemd:
USEC_INITIALIZED=685327395
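A rule in the same style as the earlier one would then match this partition by its UUID (a sketch using the values above; the symlink name cgdata2 is illustrative):
KERNEL=="xvd?1", ENV{ID_PART_ENTRY_UUID}=="5a081ffa-56e2-467b-82a9-16e3a4f441bd", SYMLINK+="oracleasm/cgdata2", OWNER="oracle", GROUP="asmadmin", MODE="0660"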

Monday, December 3, 2018

AWR Reports on Standby when Active Data Guard is Used

Oracle introduced the Remote Management Framework (RMF) in 12.2, which allows creating AWR reports on the standby database when active data guard is in use. This post lists the steps for setting up RMF so AWR reports could be generated on the standby. The post uses the data guard configuration set up on 18.3, which is mentioned in a previous post. The current data guard setup and standby open mode are as follows.
DGMGRL> show configuration

Configuration - paas_iaas_dg

  Protection Mode: MaxAvailability
  Members:
  fradb_fra1kk - Primary database
    londb        - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 11 seconds ago)

DGMGRL> show database londb

Database - londb

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          0 seconds (computed 0 seconds ago)
  Average Apply Rate: 1.00 KByte/s
  Real Time Query:    ON
  Instance(s):
    lonDB

Database Status:
SUCCESS
Oracle provides a pre-created user sys$umf (locked by default) which has all the necessary privileges to carry out the RMF related work. Unlock this user on the primary DB and set a password.
alter user sys$umf identified by rmfuser account unlock;
Next create two database links that the sys$umf user will use to connect to and from the standby DB. The TNS entries used here were created in the previous post. Before creating the database links, take note of the global_names parameter. If it is set to true, then DB links are expected to have the same name as the DB they connect to. If not, an ORA-02085 error could be encountered when trying to use the DB links.
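The current setting could be checked before creating the links, for example:
SQL> show parameter global_names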
create database link fra_to_lon CONNECT TO sys$umf IDENTIFIED BY rmfuser  using 'LONDBTNS';
create database link lon_to_fra CONNECT TO sys$umf IDENTIFIED BY rmfuser  using 'FRADBTNS';
Check the DB links are working by querying the remote instance through them. Run both on the primary and the standby.
SQL> select instance_name from v$instance@fra_to_lon;

INSTANCE_NAME
----------------
lonDB

SQL>  select instance_name from v$instance@lon_to_fra;

INSTANCE_NAME
----------------
fraDB


On the primary run the following to configure the primary node with RMF. configure_node requires a unique name for each node configured. If none is provided, the db_unique_name is used instead.
SQL> exec dbms_umf.configure_node ('fraDB');

PL/SQL procedure successfully completed.
On the standby run the following to configure the standby with RMF. In this case a unique name for the standby and the DB link name from standby to primary are given as inputs.
SQL> exec dbms_umf.configure_node ('lonDB','lon_to_fra');

PL/SQL procedure successfully completed.
Create the RMF topology by running following on primary.
SQL> exec DBMS_UMF.create_topology ('FRA_LON_TOPOLOGY');

PL/SQL procedure successfully completed.

SQL> select * from dba_umf_topology;

TOPOLOGY_NAME         TARGET_ID TOPOLOGY_VERSION TOPOLOGY
-------------------- ---------- ---------------- --------
FRA_LON_TOPOLOGY     1423735874                1 ACTIVE
View the registered nodes. Only the primary is registered so far.
SQL> select * from dba_umf_registration;

TOPOLOGY_NAME        NODE_NAME     NODE_ID  NODE_TYPE AS_SO AS_CA STATE
-------------------- ---------- ---------- ---------- ----- ----- --------------------
FRA_LON_TOPOLOGY     fraDB      1423735874          0 FALSE FALSE OK
Register the standby with the topology. The meaning of the input parameters could be found in the Oracle documentation. Execute the following on the primary.
exec DBMS_UMF.register_node ('FRA_LON_TOPOLOGY', 'lonDB', 'fra_to_lon', 'lon_to_fra', 'FALSE', 'FALSE');
Check both nodes are registered.
SQL> select * from dba_umf_registration;

TOPOLOGY_NAME        NODE_NAME     NODE_ID  NODE_TYPE AS_SO AS_CA STATE
-------------------- ---------- ---------- ---------- ----- ----- -----
FRA_LON_TOPOLOGY     fraDB      1423735874          0 FALSE FALSE OK
FRA_LON_TOPOLOGY     lonDB      4041047630          0 FALSE FALSE OK
Register the AWR service on the remote node.
SQL> exec DBMS_WORKLOAD_REPOSITORY.register_remote_database(node_name=>'lonDB');

PL/SQL procedure successfully completed.
Verify the AWR service is active on the remote node.
SQL> select * from dba_umf_service;

TOPOLOGY_NAME           NODE_ID SERVICE
-------------------- ---------- -------
FRA_LON_TOPOLOGY     4041047630 AWR


To generate an AWR report, create snapshots on the remote database (at least two are needed for a report).
SQL> exec dbms_workload_repository.create_remote_snapshot('lonDB');

PL/SQL procedure successfully completed.
Once a snapshot is created, the standby DB is listed in the AWR with a DB ID that is equal to the node ID (shown above) in the UMF registration.
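For example, the report could be generated with the standard instance report script (assuming the default script location), which prompts for the DB ID:
SQL> @?/rdbms/admin/awrrpti.sql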
Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  DB Id      Inst Num   DB Name      Instance     Host
------------ ---------- ---------    ----------   ------
  1042410484     1      FRADB        lonDB        lonvm
  4041047630     1      FRADB        lonDB        lonvm
* 1042410484     1      FRADB        fraDB        fravm

Enter value for dbid: 4041047630
The AWR instance report generated would list the role as physical standby.
The AWR control view lists the standby database as well.
SQL> select * from dba_hist_wr_control;

      DBID SNAP_INTERVAL        RETENTION            TOPNSQL        CON_ID   SRC_DBID SRC_DBNAME
---------- -------------------- -------------------- ---------- ---------- ---------- ----------
1042410484 +00000 01:00:00.0    +00008 00:00:00.0    DEFAULT             0 1042410484
4041047630 +00000 01:00:00.0    +00008 00:00:00.0    DEFAULT               1042410484 lonDB
The remote snapshots will be taken automatically according to the snapshot interval.
Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  DB Id      Inst Num   DB Name      Instance     Host
------------ ---------- ---------    ----------   ------
  1042410484     1      FRADB        lonDB        lonvm
  4041047630     1      FRADB        lonDB        lonvm
* 1042410484     1      FRADB        fraDB        fravm

Enter value for dbid: 4041047630
Using 4041047630 for database Id
Enter value for inst_num: 1
Using 1 for instance number


Specify the number of days of snapshots to choose from
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Entering the number of days (n) will result in the most recent
(n) days of snapshots being listed.  Pressing <return> without
specifying a number lists all completed snapshots.


Enter value for num_days: 1

Listing the last day's Completed Snapshots
Instance     DB Name      Snap Id       Snap Started    Snap Level
------------ ------------ ---------- ------------------ ----------

lonDB        FRADB                1  29 Nov 2018 13:11    1
                                  2  29 Nov 2018 13:59    1
                                  3  29 Nov 2018 14:58    1
                                  4  29 Nov 2018 15:58    1


However, after a switchover, when roles change, automatic snapshot creation will stop, both on the new primary (old standby) and the new standby (old primary). In order for automatic AWR snapshots to continue, do the following after a switchover. As the first step, unregister the new primary (old standby) as a remote database.
SQL> exec DBMS_WORKLOAD_REPOSITORY.UNREGISTER_REMOTE_DATABASE('lonDB','FRA_LON_TOPOLOGY',false);

PL/SQL procedure successfully completed.
Drop the topology
exec DBMS_UMF.drop_topology('FRA_LON_TOPOLOGY');

PL/SQL procedure successfully completed.
Run the unconfigure procedure on each node
SQL>  exec DBMS_UMF.UNCONFIGURE_NODE;

PL/SQL procedure successfully completed.
Re-create the topology with the new primary and standby. On the new primary
SQL> exec dbms_umf.configure_node ('lonDB');

PL/SQL procedure successfully completed.
On the new standby
SQL> exec dbms_umf.configure_node ('fraDB','fra_to_lon');

PL/SQL procedure successfully completed.
The same topology name is used
exec DBMS_UMF.create_topology ('FRA_LON_TOPOLOGY');
Register new standby
exec DBMS_UMF.register_node ('FRA_LON_TOPOLOGY', 'fraDB', 'lon_to_fra', 'fra_to_lon', 'FALSE', 'FALSE');
Register the remote DB for AWR by specifying the node ID of the DB shown in dba_umf_registration earlier. If the remote DB registration is done using the DB name as before, then "ORA-13516: AWR Operation failed: ORA-13516: AWR Operation failed: Remote source not registered for AWR" would be encountered when snapshots are taken. Doing log switches as suggested by 2409808.1 did not resolve this issue.
exec DBMS_WORKLOAD_REPOSITORY.register_remote_database(1423735874);

PL/SQL procedure successfully completed.
Once the remote DB is registered, remote snapshots could be taken
SQL> exec dbms_workload_repository.create_remote_snapshot('fraDB');

PL/SQL procedure successfully completed.
After the role reversal, automatic snapshots will continue at the snapshot interval. Run AWR instance reports the same as before.
Instances in this Workload Repository schema
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  DB Id      Inst Num   DB Name      Instance     Host
------------ ---------- ---------    ----------   ------
  1042410484     1      FRADB        fraDB        fravm
  1423735874     1      FRADB        fraDB        fravm
* 1042410484     1      FRADB        lonDB        lonvm

Enter value for dbid: 1423735874
The new primary (old standby) will be reflected on the AWR report taken against it.


Useful Metalink Note
How to Generate AWRs in Active Data Guard Standby Databases [ID 2409808.1]

Related Posts
Enabling Automatic AWR Snapshots on PDB

Update on 2020-07-13
Oracle documentation now has a separate section on "Managing ADG Role Transition". The doc shows the use of DBMS_UMF.SWITCH_DESTINATION for role reversal scenarios.
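A minimal sketch of such a call, passing the node ID of the new destination from dba_umf_registration (the value and the force flag here are illustrative):
SQL> exec DBMS_UMF.SWITCH_DESTINATION(1423735874, TRUE);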

Monday, November 12, 2018

Data Guard With OCI PaaS DB (DBaaS) Primary and Standby on IaaS

Oracle Cloud Infrastructure (OCI) platform as a service (PaaS) DB offering (DBaaS) currently does not provide data guard capability for DBs created with VM shape types. This post shows how to create a data guard configuration with the PaaS DB as primary and a standby created on IaaS resources (compute instances + block storages).
The post only looks at the technical aspects of such a configuration. No consideration is given to the licensing and support implications such a configuration would have. For example, on PaaS the user tablespaces are by default encrypted (even for SE2) and backup uses compression. On the standby site, these additional options would need to be licensed. On the support side, in a PaaS setup, as the provider of the service Oracle is expected to provide a fully managed service. Since data guard is not supported as a PaaS offering, any support on the data guard needs to be acquired through the traditional Oracle DB support channel, not through cloud support channels. These are just a few of the concerns; there could be many more.
Having said that, the diagram below shows how the setup would look. Only the major components are shown or mentioned in the diagram.
Two OCI regions were chosen to act as primary region and standby region. In this case the Frankfurt region was chosen as primary and the London region as standby. The main reason for choosing these two regions is that currently remote peering is allowed between them (the other two regions where RPC is allowed are Ashburn and Phoenix).
The London region would contain the IaaS components for the standby DB instance's host. This includes a compute instance and several block storage volumes attached to it. One of the block volumes is used as the mount point for installing the Oracle binaries; hence it is formatted and mounted on the compute instance. The other two block storage volumes are used as block devices for ASM. The ASM disk groups will have the same names as the ASM disk groups on the PaaS DB, to keep file name conversions simple. The VCN on the London region has the CIDR 10.0.0.0/16; this must be different to the CIDR block in the PaaS DB's VCN, otherwise remote peering will not be possible. In this case the VCN was created as a public network to allow easy SSH connections to the DB servers, but for production it is best practice to keep DB servers in a private network. The setup on IaaS of the standby node is as below.
Once the infrastructure for the standby node is in place, install the Oracle binaries. For this setup 18.3 was chosen as the GI and DB version. Install 18.3 GI as Oracle Restart. As mentioned earlier, name the ASM disk groups DATA and RECO to keep file name conversions simple. Install the database software only.
Next create the PaaS DB in the Frankfurt region. For this a VCN was created with CIDR 172.21.0.0/16. The DB created was a virtual machine DB with a CDB and a PDB. Moreover, auto backup was enabled for the DB. With auto backup enabled there is no need to manually set up any backup script; Oracle will carry out full and incremental backups.
When first created, the 18c DB version is 18.1. To keep both sites' DB versions the same, patch the PaaS DB with the Jul 2018 18c database patch.
Primary site DB configuration is shown below.
Oracle Database 18c Enterprise Edition Release 18.0.0.0.0 - Production
Version 18.3.0.0.0

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 FRAPDB                         READ ONLY  NO
The next step is to establish connectivity between the two VCNs in the two regions. This is accomplished by creating a remote peering connection (RPC) between the two VCNs. To create an RPC, first create a dynamic routing gateway (DRG) in each region and attach it to the VCN used by the DB hosts. Afterwards create an RPC, establish a connection between the two VCNs and wait for the status to change to "PEERED".
With the RPC in place, the next step is to add routing rules so that traffic flows from one region to another. On the Frankfurt (FRA) region add a routing rule such that any traffic that points to a 10.0.0.0/16 address is routed to the DRG. In the same way, on the London (LON) region add a routing rule so any traffic for 172.21.0.0/16 is pointed to its DRG. The image below shows these two rules in place. In addition to the two DRG related rules, there's an internet gateway rule to allow direct access to the DB servers. If the DB servers were in a private network this rule would not be necessary.
Add security list rules allowing traffic coming from the remote region to DB port 1521. On the LON VCN's security list add an entry for 172.21.0.0/16 and port 1521, and on the FRA VCN's security list add an entry for 10.0.0.0/16 and port 1521.
Apart from these security list rules, the following iptables rule must be added to the standby compute node in the LON region (IaaS) for traffic to flow.
iptables -I INPUT -p tcp -s 172.21.0.0/16 --dport 1521 -j ACCEPT
firewall-cmd --runtime-to-permanent

iptables -L INPUT  2 -n --line-numbers
2    ACCEPT     tcp  --  172.21.0.0/16        0.0.0.0/0            tcp dpt:1521
At this point it is also better to add the hostname of each node to the other node's /etc/hosts file.
Create a TNS entry in the standby Oracle home's tnsnames.ora file and establish a remote connection to the primary DB. If there are any network connectivity issues, resolve them before the next steps.
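For example, basic connectivity could be verified from the standby host (a sketch; FRADBTNS is the TNS alias defined later in this post):
tnsping FRADBTNS
sqlplus "sys/password@FRADBTNS as sysdba"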



Creating a data guard configuration is similar to the earlier post on creating a data guard with CDB. However, there are a few points specific to cloud which must be taken into consideration.
The PaaS DB uses encryption by default for all user tablespaces. Therefore, in order for the standby to use these encrypted tablespaces, copy the key store files (*wallet.*) to the standby. The location of the wallet files on the standby is added to the sqlnet.ora file on the standby.
ENCRYPTION_WALLET_LOCATION=(SOURCE=(METHOD=FILE)(METHOD_DATA=(DIRECTORY=/opt/app/oracle/product/18.0.0/dbhome_1/network/admin/tde)))
The PaaS DB's sqlnet.ora file also has several other encryption related entries that affect client/server communication. Copy these entries to the standby's sqlnet.ora file.
SQLNET.ENCRYPTION_SERVER=REQUIRED
SQLNET.CRYPTO_CHECKSUM_SERVER=REQUIRED
SQLNET.ENCRYPTION_TYPES_SERVER=(AES256,AES192,AES128)
SQLNET.CRYPTO_CHECKSUM_TYPES_SERVER=(SHA1)
SQLNET.ENCRYPTION_CLIENT=REQUIRED
SQLNET.CRYPTO_CHECKSUM_CLIENT=REQUIRED
SQLNET.ENCRYPTION_TYPES_CLIENT=(AES256,AES192,AES128)
SQLNET.CRYPTO_CHECKSUM_TYPES_CLIENT=(SHA1)
With the encryption related work done, the standby DB creation could begin. Only the steps that differ or require additional consideration are mentioned below.
The PaaS DB has force logging enabled by default. Verify this, and if for some reason it is not enabled, run the command to enable force logging.
The PaaS DB does not have the flashback option enabled. If needed (not a must for DG but good to have) enable flashback on the primary.
Add the required number of standby log files on the primary DB.
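A minimal sketch of these three steps on the primary (the standby log file size is illustrative and must match the online redo log size):
SQL> alter database force logging;
SQL> alter database flashback on;
SQL> alter database add standby logfile thread 1 size 1g;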
The PaaS DB has the following entries related to DB name and domain. The key point here is that db_name is different to db_unique_name.
*.db_domain='frasubnet.fravcn.oraclevcn.com'
*.db_name='fraDB'
*.db_unique_name='fraDB_fra1kk'
Create static listener entries for primary DB as below.
SID_LIST_LISTENER =
(SID_LIST =
        (SID_DESC =
                (GLOBAL_DBNAME = fraDB_fra1kk.frasubnet.fravcn.oraclevcn.com)
                (SID_NAME = fraDB)
                (ORACLE_HOME = /u01/app/oracle/product/18.2/dbhome_1)
        )
)
Create static listener entries for standby as below. The standby DB is called lonDB.
SID_LIST_LISTENER =
(SID_LIST =
        (SID_DESC =
                (GLOBAL_DBNAME = lonDB.lonsubnet.lonvcn.oraclevcn.com)
                (SID_NAME = lonDB)
                (ORACLE_HOME = /opt/app/oracle/product/18.0.0/dbhome_1)
        )
)
There's no need to create DGMGRL related static listener entries any more if Oracle clusterware is used to manage the DB.
Create TNS entries for both standby and primary in each other's tnsnames.ora files. TNS entry for connecting to primary
FRADBTNS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = fravm)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = fraDB_fra1kk.frasubnet.fravcn.oraclevcn.com)
    )
  )
and TNS entry for connecting to standby
LONDBTNS =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = lonvm)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = lonDB.lonsubnet.lonvcn.oraclevcn.com)
    )
  )
At the same time add a local listener entry on the standby.
LISTENER_LONDB =
  (ADDRESS = (PROTOCOL = TCP)(HOST = lonvm)(PORT = 1521))
Add and modify data guard related initialization parameters on the primary DB. Restart the primary DB for the changes to take effect. Whenever there's a reference to the primary DB, it's the db_unique_name of the primary DB that must be used, as it's different to db_name in the PaaS DB.
alter system set log_archive_config='dg_config=(fraDB_fra1kk,lonDB)' scope=both ;
alter system set log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=fraDB_fra1kk' scope=both;
alter system set log_archive_dest_2='service=LONDBTNS LGWR ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=lonDB' scope=both;
alter system set log_archive_dest_state_2='enable' scope=both;
alter system set log_archive_dest_state_1='enable' scope=both;
alter system set fal_server='LONDBTNS' scope=both;
alter system set log_archive_max_processes=10 scope=both;
alter system set db_file_name_convert='/londb/','/fraDB_fra1kk/' scope=spfile;
alter system set log_file_name_convert='/londb/','/fraDB_fra1kk/' scope=spfile;
alter system set standby_file_management='AUTO' scope=both;
Once the primary DB is restarted, create a pfile from the spfile. Copy the pfile to the standby home and modify the following entries to reflect the standby DB configuration. Only the values related to the DG configuration are shown in the table. There were several hidden parameters set on the primary DB which were left in the standby pfile as well. The full lists of pfile entries of primary and standby are given at the end of the post. One of the cloud related parameters is encrypt_new_tablespaces. The default value for this is CLOUD_ONLY and, as said earlier, all user tablespaces created on the PaaS DB are encrypted by default. The same behaviour must be reflected on the standby side as well, even when the standby becomes a primary. For this reason, on the standby this parameter is set to ALWAYS.
audit_file_dest
  On Primary: /u01/app/oracle/admin/fraDB_fra1kk/adump
  On Standby: /opt/app/oracle/admin/lonDB/adump
control_files
  On Primary: '+RECO/FRADB_FRA1KK/CONTROLFILE/current.256.990973301'
  On Standby: '+DATA','+RECO'
db_domain
  On Primary: frasubnet.fravcn.oraclevcn.com
  On Standby: lonsubnet.lonvcn.oraclevcn.com
db_unique_name
  On Primary: fraDB_fra1kk
  On Standby: lonDB
db_file_name_convert
  On Primary: '/londb/','/fraDB_fra1kk/'
  On Standby: '/fraDB_fra1kk/','/lonDB/'
diagnostic_dest
  On Primary: /u01/app/oracle
  On Standby: /opt/app/oracle
dispatchers
  On Primary: (PROTOCOL=TCP) (SERVICE=fraDBXDB)
  On Standby: (PROTOCOL=TCP) (SERVICE=lonDBXDB)
encrypt_new_tablespaces
  On Primary: CLOUD_ONLY
  On Standby: ALWAYS
fal_server
  On Primary: LONDBTNS
  On Standby: FRADBTNS
local_listener
  On Primary: LISTENER_FRADB
  On Standby: LISTENER_LONDB
log_archive_config
  On Primary: dg_config=(fraDB_fra1kk,lonDB)
  On Standby: dg_config=(fraDB_fra1kk,lonDB)
log_archive_dest_1
  On Primary: location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=fraDB_fra1kk
  On Standby: location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=lonDB
log_archive_dest_2
  On Primary: service=LONDBTNS LGWR ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=lonDB
  On Standby: service=FRADBTNS LGWR ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=fraDB_fra1kk
log_file_name_convert
  On Primary: '/lonDB/','/fraDB_fra1kk/'
  On Standby: '/fraDB_fra1kk/','/lonDB/'




Once the pfile modifications are complete, copy it to $ORACLE_HOME/dbs/initlonDB.ora. Start the DB in nomount mode and create a spfile from the pfile. Finally start the standby instance using the spfile in nomount mode. Steps are shown below.
startup nomount; -- uses pfile to start the instance
create spfile from pfile; -- spfile created as +DATA/spfilelonDB.ora
startup force nomount; -- restart the instance using the spfile
With the standby instance started in nomount mode, the duplication for standby could be done in multiple ways. Two techniques are shown here; use only one of them to create the standby DB.

Duplicating using active database option
To duplicate using the active database option, connect to both the target and auxiliary (standby) instances and allocate disk type RMAN channels. The default channel type for PaaS is tape (sbt_tape) and duplication will fail if no disk type channels are specified.
rman target / auxiliary sys/pw@londbtns

run {
allocate channel ch1 device type disk;
allocate auxiliary channel ch2 device type disk;
duplicate target database for standby from active database;
}
Duplicating on standby using primary backups
With auto backup set on the PaaS primary DB, the backups are stored in OCI object storage. To simplify the duplication process, create disk backups and copy them to the standby node. By default the PaaS DB has backup encryption enabled, so use the set encryption option (refer 1551420.1) to take the backup.
SET ENCRYPTION ON IDENTIFIED BY "password" ONLY;
run{
allocate channel ch1 device type disk;
backup current controlfile for standby format '/u01/app/oracle/backup/stdbycontro.ctl';
backup database format '/u01/app/oracle/backup/dbbackup%U' plus archivelog format '/u01/app/oracle/backup/archbackup%U' delete all input;
alter system switch logfile;
alter system switch logfile;
alter system switch logfile;
backup archivelog all format '/u01/app/oracle/backup/archbkp%U' delete all input;
}
Once the files are copied to the standby, run the duplication command connecting to the auxiliary instance. Set the decryption option on the standby site to restore from the encrypted backups (316886.1).
rman auxiliary sys/pw

SET DECRYPTION IDENTIFIED BY "password";
run {
allocate auxiliary channel ch1 device type disk;
duplicate database for standby BACKUP LOCATION '/home/oracle/backup' dorecover nofilenamecheck;
}
Once the duplication has finished, start the standby recovery and verify archive logs are being applied.
SQL> select sequence#,thread#,applied from v$archived_log;

 SEQUENCE#    THREAD# APPLIED
---------- ---------- ---------
        23          1 YES
        24          1 YES
        25          1 NO
        26          1 NO

SQL> alter database recover managed standby database disconnect;

Database altered.

SQL> select sequence#,thread#,applied from v$archived_log;

 SEQUENCE#    THREAD# APPLIED
---------- ---------- ---------
        23          1 YES
        24          1 YES
        25          1 YES
        26          1 YES
Add the standby to the Oracle restart configuration.
srvctl add database -db lonDB -oraclehome /opt/app/oracle/product/18.0.0/dbhome_1  -spfile "+DATA/spfilelonDB.ora" -role physical_standby -startoption mount -diskgroup "DATA,RECO"

srvctl config database -db lonDB
Database unique name: lonDB
Database name:
Oracle home: /opt/app/oracle/product/18.0.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/spfilelonDB.ora
Password file:
Domain:
Start options: mount
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Disk Groups: DATA,RECO
Services:
OSDBA group:
OSOPER group:
Database instance: lonDB
Verify the user tablespaces created on the primary are created as encrypted tablespaces on the standby. On the primary create a tablespace with default options. This tablespace will have encryption by default.
SQL> create tablespace test datafile size 10m;

SQL>  select con_id,tablespace_name,ENCRYPTED from cdb_tablespaces;

    CON_ID TABLESPACE_NAME                ENC
---------- ------------------------------ ---
         3 SYSTEM                         NO
         3 SYSAUX                         NO
         3 UNDOTBS1                       NO
         3 TEMP                           NO
         3 USERS                          YES
         3 TEST                           YES
Once the redo changes are applied on the standby, verify the tablespace was created as encrypted.
SQL> select con_id,tablespace_name,ENCRYPTED from cdb_tablespaces;

    CON_ID TABLESPACE_NAME                ENC
---------- ------------------------------ ---
         3 SYSTEM                         NO
         3 SYSAUX                         NO
         3 UNDOTBS1                       NO
         3 TEMP                           NO
         3 USERS                          YES
         3 TEST                           YES
Finally create the DG broker configuration. As both DBs are managed by clusterware, DGMGRL related static listener entries are not needed. Simply create the configuration as shown below.
DGMGRL> create configuration PaaS_IaaS_DG as primary database is fraDB_fra1kk connect identifier is fraDBTNS;
DGMGRL> add database lonDB as connect identifier is lonDBTNS;
DGMGRL> enable configuration;
Enabled.

DGMGRL> show configuration
Configuration - paas_iaas_dg

  Protection Mode: MaxPerformance
  Members:
  fradb_fra1kk - Primary database
    londb        - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 0 seconds ago)

DGMGRL> validate database lonDB

  Database Role:     Physical standby database
  Primary Database:  fradb_fra1kk

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

  Flashback Database Status:
    fradb_fra1kk:  Off
    londb       :  Off

  Managed by Clusterware:
    fradb_fra1kk:  YES
    londb       :  YES
This concludes the creation of the standby on IaaS for a PaaS primary DB.



However, there are a few points to consider if there's a role reversal. In this case the PaaS DB would become the standby.
DGMGRL> switchover to lonDB
Performing switchover NOW, please wait...
New primary database "londb" is opening...
Oracle Clusterware is restarting database "fradb_fra1kk" ...
Connected to "fraDB_fra1kk"
Connected to "fraDB_fra1kk"
Switchover succeeded, new primary is "londb"

DGMGRL> show configuration

Configuration - paas_iaas_dg

  Protection Mode: MaxPerformance
  Members:
  londb        - Primary database
    fradb_fra1kk - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 54 seconds ago)
When the PaaS DB becomes a standby there are a few issues related to the automatic backups, which are managed by Oracle.
If the DB is in mount mode the backups will fail and the console will have a message saying the database is in an invalid state.
Further information on this could be viewed using dbcli.
# dbcli describe-job -i ee14045b-d51b-4dbb-826d-2de4ac34b000 -j
{
  "jobId" : "ee14045b-d51b-4dbb-826d-2de4ac34b000",
  "status" : "Failure",
  "message" : "DCS-10089:Database : fraDB_fra1kk is in invalid state: DB is mounted and is not consistently down.Backup cannot be taken...",
  "reports" : [ {
If the DB is open in read only mode then backups succeed with a warning. However, on the console the backup is listed as failed.
The dbcli output says completed with warnings.
# dbcli describe-job -i c8617912-945b-44ca-82db-c001fda55846 -j
{
  "jobId" : "c8617912-945b-44ca-82db-c001fda55846",
  "status" : "Failure",
  "message" : "DCS-10001:Internal error encountered: Unable to get the rman ...output:\nSTATUS\n--\n[COMPLETED WITH WARNINGS]\n\n error:.",
  "reports" : [ {
    "taskId" : "TaskZLockWrapper_1130",
    "taskName" : "task:TaskZLockWrapper_1130",
    "taskResult" : "DCS-10001:Internal error encountered: Unable to ... output:\nSTATUS\n--\n[COMPLETED WITH WARNINGS]\n\n error:.",
The backup job details view also reflects this information.
TIME_TAKEN_DISPLAY   RMAN_END_TIME              I_SIZE_GIG O_SIZE_GIG COMPRESSION_RATIO STATUS                  INPUT_TYPE    OUTPUT_DEVICE_TYP
-------------------- -------------------------- ---------- ---------- ----------------- ----------------------- ------------- -----------------
00:00:53             09-nov-2018 06:31          3.27423096 .743408203         4.4043514 COMPLETED WITH WARNINGS DB INCR       SBT_TAPE
00:00:56             08-nov-2018 06:21          2.62056637 .043457031         60.302471 COMPLETED WITH WARNINGS DB INCR       SBT_TAPE
00:00:47             07-nov-2018 06:21          .839805126 .115478516        7.27239281 COMPLETED               DB INCR       SBT_TAPE
00:00:47             06-nov-2018 06:22          .796207905 .108886719        7.31225914 COMPLETED               DB INCR       SBT_TAPE
The reason for this is the RMAN-06820 warning encountered when the backup is taken. This warning is shown when the standby is unable to connect to the primary and archive the current redo log before the backup. The full backup log on the PaaS is shown below.
Starting backup at 2018/11/09 06:31:41
RMAN-06820: warning: failed to archive current log at primary database
cannot connect to remote database
using channel ORA_SBT_TAPE_1
using channel ORA_SBT_TAPE_2
using channel ORA_SBT_TAPE_3
using channel ORA_SBT_TAPE_4
using channel ORA_SBT_TAPE_5
specification does not match any archived log in the repository
backup cancelled because there are no files to backup
Finished backup at 2018/11/09 06:31:42

Starting Control File and SPFILE Autobackup at 2018/11/09 06:31:42
piece handle=c-1042410484-20181109-00 comment=API Version 2.0,MMS Version 12.2.0.2
Finished Control File and SPFILE Autobackup at 2018/11/09 06:31:43

Starting backup at 2018/11/09 06:31:45
using channel ORA_SBT_TAPE_1
using channel ORA_SBT_TAPE_2
using channel ORA_SBT_TAPE_3
using channel ORA_SBT_TAPE_4
using channel ORA_SBT_TAPE_5
channel ORA_SBT_TAPE_1: starting compressed full datafile backup set
channel ORA_SBT_TAPE_1: specifying datafile(s) in backup set
channel ORA_SBT_TAPE_2: starting compressed full datafile backup set
channel ORA_SBT_TAPE_2: specifying datafile(s) in backup set
including current SPFILE in backup set
channel ORA_SBT_TAPE_2: starting piece 1 at 2018/11/09 06:31:46
including current control file in backup set
channel ORA_SBT_TAPE_1: starting piece 1 at 2018/11/09 06:31:47
channel ORA_SBT_TAPE_2: finished piece 1 at 2018/11/09 06:31:47
piece handle=DBTRegular-L01541744341311Ir3_spf_FRADB_1042410484_66thov6i_1_1_20181109_991722706 tag=DBTREGULAR-L01541744341311IR3 comment=API Version 2.0,MMS Version 12.2.0.2
channel ORA_SBT_TAPE_2: backup set complete, elapsed time: 00:00:01
channel ORA_SBT_TAPE_1: finished piece 1 at 2018/11/09 06:31:48
piece handle=DBTRegular-L01541744341311Ir3_cf_FRADB_1042410484_65thov6i_1_1_20181109_991722706 tag=DBTREGULAR-L01541744341311IR3 comment=API Version 2.0,MMS Version 12.2.0.2
channel ORA_SBT_TAPE_1: backup set complete, elapsed time: 00:00:01
Finished backup at 2018/11/09 06:31:48

Starting Control File and SPFILE Autobackup at 2018/11/09 06:31:48
piece handle=c-1042410484-20181109-01 comment=API Version 2.0,MMS Version 12.2.0.2
Finished Control File and SPFILE Autobackup at 2018/11/09 06:31:51
As seen from the above output, besides the initial warning there are no errors related to the actual backup. The reason for archiving the primary redo before the standby backup is to make the backup consistent. More on this warning could be found in 2314136.1, 2025142.1 and 2098273.1. This issue could be fixed by connecting to the standby with a username and password when using RMAN. In the PaaS DB this would require altering the backup agent login.
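For example, when taking a manual backup the standby could be connected to with explicit credentials, so it can in turn reach the primary (a sketch; the password is illustrative):
rman target sys/password@FRADBTNS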

Related Posts
Removing a Standby Database From a Data Guard Configuration
Adding a New Physical Standby to Exiting Data Guard Setup
Creating Data Guard Broker for an Existing 12.2 Data Guard Setup with CDB
Oracle Data Guard on 12.2 CDB with Oracle Restart
Creating Data Guard Broker on 12c
Adding Far Sync Instances to Existing Data Guard Configuration
11gR2 RAC to RAC Data Guard
Data Guard Broker for 11gR2 RAC
11gR2 Standalone Data Guard (with ASM and Role Separation)
Setting Up Data Guard Broker for an Existing Data Guard Configuration with Far Sync
Enable Active Dataguard on 11gR2 RAC Standby



Primary (PaaS DB) pfile
*._datafile_write_errors_crash_instance=false
*._db_writer_coalesce_area_size=16777216
*._disable_interface_checking=TRUE
*._enable_numa_support=FALSE
*._file_size_increase_increment=2143289344
*._fix_control='18960760:on'
*._gc_policy_time=20
*._gc_undo_affinity=TRUE
*.audit_file_dest='/u01/app/oracle/admin/fraDB_fra1kk/adump'
*.audit_sys_operations=TRUE
*.audit_trail='db'
*.compatible='18.0.0.0'
*.control_file_record_keep_time=38
*.control_files='+RECO/FRADB_FRA1KK/CONTROLFILE/current.256.990973301'
*.control_management_pack_access='DIAGNOSTIC+TUNING'
*.cpu_count=0
*.cursor_sharing='EXACT'
*.db_block_checking='FULL'
*.db_block_checksum='FULL'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+RECO'
*.db_domain='frasubnet.fravcn.oraclevcn.com'
*.db_file_name_convert='/londb/','/fraDB_fra1kk/'
*.db_files=1024
*.db_lost_write_protect='TYPICAL'
*.db_name='fraDB'
*.db_recovery_file_dest='+RECO'
*.db_recovery_file_dest_size=34816m
*.db_unique_name='fraDB_fra1kk'
*.diagnostic_dest='/u01/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=fraDBXDB)'
*.enable_ddl_logging=FALSE
*.enable_pluggable_database=true
*.encrypt_new_tablespaces='CLOUD_ONLY'
*.fal_server='LONDBTNS'
*.fast_start_mttr_target=300
*.filesystemio_options='setall'
*.global_names=TRUE
*.local_listener='LISTENER_FRADB'
*.log_archive_config='dg_config=(fraDB_fra1kk,lonDB)'
*.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=fraDB_fra1kk'
*.log_archive_dest_2='service=LONDBTNS LGWR ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=lonDB'
*.log_archive_dest_state_1='enable'
*.log_archive_dest_state_2='enable'
*.log_archive_format='%t_%s_%r.dbf'
*.log_archive_max_processes=10
*.log_buffer=33554432
*.log_file_name_convert='/londb/','/fraDB_fra1kk/'
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.open_cursors=1000
*.os_authent_prefix='ops$'
*.parallel_execution_message_size=16384
*.parallel_threads_per_cpu=2
*.pga_aggregate_limit=30720m
*.pga_aggregate_target=15360m
*.processes=800
*.remote_login_passwordfile='EXCLUSIVE'
*.session_cached_cursors=100
*.sga_target=30720m
*.sql92_security=TRUE
*.standby_file_management='AUTO'
*.undo_retention=900
*.undo_tablespace='UNDOTBS1'
*.use_large_pages='false'
Standby (DB on IaaS) pfile
*._datafile_write_errors_crash_instance=false
*._db_writer_coalesce_area_size=16777216
*._disable_interface_checking=TRUE
*._enable_numa_support=FALSE
*._file_size_increase_increment=2143289344
*._fix_control='18960760:on'
*._gc_policy_time=20
*._gc_undo_affinity=TRUE
*.audit_file_dest='/opt/app/oracle/admin/lonDB/adump'
*.audit_sys_operations=TRUE
*.audit_trail='db'
*.compatible='18.0.0.0'
*.control_file_record_keep_time=38
*.control_files='+DATA','+RECO'
*.control_management_pack_access='DIAGNOSTIC+TUNING'
*.cpu_count=0
*.cursor_sharing='EXACT'
*.db_block_checking='FULL'
*.db_block_checksum='FULL'
*.db_block_size=8192
*.db_create_file_dest='+DATA'
*.db_create_online_log_dest_1='+RECO'
*.db_domain='lonsubnet.lonvcn.oraclevcn.com'
*.db_file_name_convert='/fraDB_fra1kk/','/lonDB/'
*.db_files=1024
*.db_lost_write_protect='TYPICAL'
*.db_name='fraDB'
*.db_recovery_file_dest='+RECO'
*.db_recovery_file_dest_size=34816m
*.db_unique_name='lonDB'
*.diagnostic_dest='/opt/app/oracle'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=lonDBXDB)'
*.enable_ddl_logging=FALSE
*.enable_pluggable_database=true
*.encrypt_new_tablespaces='ALWAYS'
*.fal_server='FRADBTNS'
*.fast_start_mttr_target=300
*.filesystemio_options='setall'
*.global_names=TRUE
*.local_listener='LISTENER_LONDB'
*.log_archive_config='dg_config=(fraDB_fra1kk,lonDB)'
*.log_archive_dest_1='location=use_db_recovery_file_dest valid_for=(all_logfiles,all_roles) db_unique_name=lonDB'
*.log_archive_dest_2='service=FRADBTNS LGWR ASYNC NOAFFIRM max_failure=10 max_connections=5 reopen=180 valid_for=(online_logfiles,primary_role) db_unique_name=fraDB_fra1kk'
*.log_archive_dest_state_1='enable'
*.log_archive_dest_state_2='enable'
*.log_archive_format='%t_%s_%r.dbf'
*.log_archive_max_processes=10
*.log_buffer=33554432
*.log_file_name_convert='/fraDB_fra1kk/','/lonDB/'
*.nls_language='AMERICAN'
*.nls_territory='AMERICA'
*.open_cursors=1000
*.os_authent_prefix='ops$'
*.parallel_execution_message_size=16384
*.parallel_threads_per_cpu=2
*.pga_aggregate_limit=30720m
*.pga_aggregate_target=15360m
*.processes=800
*.remote_login_passwordfile='EXCLUSIVE'
*.session_cached_cursors=100
*.sga_target=30720m
*.sql92_security=TRUE
*.standby_file_management='AUTO'
*.undo_retention=900
*.undo_tablespace='UNDOTBS1'
*.use_large_pages='false'

Thursday, November 8, 2018

DBCA Templates and PDBs - 12.1, 12.2 and 18c

This post looks at the option of creating a database using a template, where the template was created from a CDB containing a PDB. During the testing it was observed that 12.1, 12.2 and 18c behave differently.
The following steps were done on databases of all three versions. Two additional tablespaces were created, one in the root container (called ROOTBS) and one in the PDB (called TEST).
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDBONE                         READ WRITE NO

    CON_ID NAME
---------- -------
         1 USERS
         1 ROOTBS
         1 SYSAUX
         1 SYSTEM
         1 UNDOTBS1
         1 TEMP
         2 SYSAUX
         2 SYSTEM
         2 TEMP
         3 SYSAUX
         3 TEMP
         3 TEST
         3 SYSTEM
The basic template creation steps are shown below (for 12.1 only).
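For reference, a template could also be created from an existing DB using DBCA in silent mode (a sketch; the DB name, template name and credentials are illustrative):
dbca -silent -createTemplateFromDB -sourceDB cdb12c -templateName cdbtemplate -sysDBAUserName sys -sysDBAPassword password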



Creating CDB Using Template in 12.1
Select the template created earlier when creating the database using DBCA. The show details button will give a detailed list of the tablespaces included in the template. In this case it shows the two tablespaces created earlier.
The summary page also lists the tablespaces.
During the DB creation process the following error is shown, but DBCA is able to run to completion.
At the end of the CDB creation, the newly created CDB will have a PDB with the same name as the PDB the template was created from.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDBONE                         READ WRITE NO
The new CDB will also have both the root container level tablespace and the PDB level tablespace.

SQL> select con_id,name from v$tablespace order by 1;

    CON_ID NAME
---------- ------------------------------
         1 ROOTBS
         1 USERS
         1 SYSAUX
         1 SYSTEM
         1 UNDOTBS1
         1 TEMP
         2 SYSAUX
         2 SYSTEM
         2 TEMP
         3 SYSTEM
         3 SYSAUX
         3 TEMP
         3 TEST

Creating CDB Using Template in 12.2
Similar to 12.1, a template was created on a 12.2 DB. Additional tablespaces were created in the root container and in the PDB.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 DEVPDB                         READ WRITE NO

SQL>  select con_id,name from v$tablespace order by 1;

    CON_ID NAME
---------- ------------------------------
         1 USERS
         1 SYSAUX
         1 TEMP
         1 SYSTEM
         1 UNDOTBS1
         1 ROOTBS
         2 TEMP
         2 UNDOTBS1
         2 SYSAUX
         2 SYSTEM
         3 SYSAUX
         3 UNDOTBS1
         3 TEMP
         3 USERS
         3 ASANGA
         3 SYSTEM
The DB creation was done using the template.
The summary page also lists the additional tablespaces.
However, due to an existing issue (tracked under bug 26921308) the DB creation fails.



Creating CDB Using Template in 18c
The 18c database had the following additional tablespaces in the root container and PDB.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 DOCKLAND                       READ WRITE NO

SQL> select con_id,name from v$tablespace order by 1;

    CON_ID NAME
---------- ------------------------------
         1 USERS
         1 SYSAUX
         1 TEMP
         1 SYSTEM
         1 UNDOTBS1
         1 ROOTBS
         2 TEMP
         2 UNDOTBS1
         2 SYSAUX
         2 SYSTEM
         3 SYSAUX
         3 UNDOTBS1
         3 TEMP
         3 USERS
         3 ASANGA
         3 SYSTEM
The database was created using the template, which showed the tablespaces for the PDB as well.
However, the newly created CDB didn't have any new PDBs. But it did have the root container tablespaces which were listed in the template.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO

SQL> select con_id,name from v$tablespace order by 1;

    CON_ID NAME
---------- ------------------------------
         1 SYSTEM
         1 UNDOTBS1
         1 ROOTBS
         1 TEMP
         1 USERS
         1 SYSAUX
         2 TEMP
         2 SYSAUX
         2 UNDOTBS1
         2 SYSTEM
This shows that DBCA templates only carry over CDB level tablespaces; for PDBs, other methods such as cloning or transporting must be used.

Related Metalink Notes
Can PDB Templates be Created in DBCA? [ID 2128673.1]
12.2 Dbca: Custom Template Doesn't Work As Expected With Cdb Option [ID 2283829.1]
ORA-65101 Wrong option for CDB parameter in a Template which created by DBCA with silent mode [ID 2270420.1]
12.2 DBCA TEMPLATE NOT SAVE SETTING FOR "INCLUDE IN PDBS" OF "DATABASE OPTIONS" [ID 2274760.1]

Thursday, November 1, 2018

Verifying Redo Transport Compression in 11.2, 12.1 and 12.2 Data Guard Configurations

Redo shipped to the standby could be compressed for transport in two ways. One is external to the database and uses SSH tunnelling. The other method is enabling compression for the remote log archive destination; this second option requires an Advanced Compression license. Once redo transport compression (RTC) is enabled, it could be verified by enabling log archive tracing. However, there's a difference in the level of tracing and information between DB versions 11.2, 12.1 and 12.2. This post explores these variations.

Verifying RTC in 11.2.0.4
The DG configuration for 11.2 is as follows
DGMGRL> show configuration

Configuration - db_dg

  Protection Mode: MaxAvailability
  Databases:
    prod  - Primary database
    stdby - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS
Enable RTC on the standby database. This updates the log archive destination setting on the primary by adding the compression=enable option.
DGMGRL> edit database stdby set property RedoCompression='enable';
Property "redocompression" updated
On the alert log of the primary the following could be seen.
Tue Aug 14 14:29:41 2018
ALTER SYSTEM SET log_archive_dest_3='service="stdbytns"','LGWR SYNC AFFIRM delay=0 optional compression=enable max_failure=10 max_connections=5 reopen=180 db_unique_name="stdby" net_timeout=30','valid_for=(all_logfiles,primary_role)' SCOPE=BOTH;
Enable log archive tracing level 1 for the primary database
DGMGRL>  edit database prod set property LogArchiveTrace='1';
Property "logarchivetrace" updated
Enable log archive tracing level 16 for the standby
DGMGRL> edit database stdby set property LogArchiveTrace='16';
Property "logarchivetrace" updated
With these settings in place, on the primary the *nss*.trc files will contain entries similar to the following to show that RTC is enabled.
RTC: Compression ratio for this I/O is 81%
RTC: Compression ratio for this I/O is 81%
RTC: Compression ratio for this I/O is 81%
RTC: Compression ratio for this I/O is 80%
On the standby the decompression information is shown in *rfs*.trc files as follows
RTC: decompressed 94 bytes to 512 bytes
RTC: decompressed 93 bytes to 512 bytes
RTC: decompressed 99 bytes to 512 bytes
RTC: decompressed 88 bytes to 512 bytes
RTC: decompressed 5221 bytes to 24576 bytes
If the tracing on the primary is increased to 16 (which the Oracle documentation describes as "Track detailed ARCHIVELOG destination activity") only limited RTC information is given out.
edit database prod set property LogArchiveTrace='16';
Both *arc*.trc and *lgwr*.trc files will have entries similar to following.
cat prod_arc7_3372.trc | grep RTC
RTC enabled.
cat prod_lgwr_3297.trc | grep RTC
RTC enabled.
RTC enabled.
Verifying RTC in 12.1.0.2
The 12.1.0.2 DG configuration is as follows
DGMGRL> show configuration;

Configuration - ent12c1_dgb

  Protection Mode: MaxAvailability
  Members:
  ent12c1  - Primary database
    fs12c1   - Far sync instance
      ent12c1s - Physical standby database

  Members Not Receiving Redo:
  fs12c1s  - Far sync instance

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS
On 12.1 setting log archive trace to 1 has no effect on tracing RTC. Setting the value to 16 on the primary gives descriptive RTC information which indicates whether RTC is enabled or not.
DGMGRL>  edit database ent12c1 set property logarchivetrace='16';
Property "logarchivetrace" updated
The RTC related information appears in *arc*.trc and *tt*.trc files
cat ent12c1_arc3_15498.trc | grep RTC
RTC enabled.
RTC: Allocating 11534336 buffer bytes for compression.
RTC: header-batch write block 1, count 1 - RTC disabled
RTC: Enable compression for subsequent batches
RTC: actual compressed bytes# 1295

cat ent12c1_tt01_15532.trc | grep RTC
RTC enabled.
Re-enabling RTC following backoff interval 1.
RTC: actual compressed bytes# 10270
RTC: actual compressed bytes# 536
RTC: actual compressed bytes# 1697
RTC: actual compressed bytes# 508
Redo compression could be enabled on far sync instances as well. Setting the trace level to 16 on the far sync instance will also generate RTC information in the far sync trace files.
DGMGRL> edit far_sync fs12c1 set property RedoCompression='enable';
Property "redocompression" updated

DGMGRL> edit far_sync fs12c1 set property LogArchiveTrace='16';
Property "logarchivetrace" updated

cat fs12c1_tt00_2515.trc | grep RTC
RTC: actual compressed bytes# 97
RTC: actual compressed bytes# 79
To get decompression related information from the standby, set the log archive trace value to 1024 (1024: Tracks RFS physical client).
DGMGRL> edit database ent12c1s set property logarchivetrace='1024';
Property "logarchivetrace" updated
Decompression related information is shown in *rfs*.trc files.
cat ent12c1s_rfs_8859.trc | grep RTC
RTC: decompressed 79 bytes to 512 bytes
RTC: decompressed 81 bytes to 512 bytes
RTC: decompressed 97 bytes to 512 bytes
RTC: decompressed 79 bytes to 512 bytes


Verifying RTC in 12.2.0.1 (and 18.3.0.0)
The 12.2 DG configuration is as follows
DGMGRL> show configuration

Configuration - dg12c2

  Protection Mode: MaxAvailability
  Members:
  prodcdb - Primary database
    stbycdb - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 35 seconds ago)
On 12.2, setting log archive trace to 16 on the primary generates the RTC information.
DGMGRL> edit database prodcdb set property LogArchiveTrace='16';
Property "logarchivetrace" updated
The RTC information is available in *lgwr*.trc and *nss*.trc files.
cat prodcdb_lgwr_2616.trc | grep RTC
RTC enabled.
RTC enabled.
RTC enabled.

cat prodcdb_nss2_2708.trc | grep RTC
RTC: Enable compression following header-only I/O
RTC: Enable compression following header-only I/O
RTC: Enable compression following header-only I/O
No other values such as 512 (512: Tracks LGWR redo shipping network activity) or combinations of values seem to give more descriptive information such as compression ratio or compressed bytes, as was the case with previous versions. However, decompression information on the standby could be traced with 1024.
DGMGRL> edit database stbycdb set property LogArchiveTrace='1024';
Property "logarchivetrace" updated
Decompression information is available in the *rfs*.trc files.
cat stbycdb_rfs_2243.trc
krsr_rfs_wda: RTC: decompressed 82 bytes to 512 bytes
krsr_rfs_wda: RTC: decompressed 81 bytes to 512 bytes
krsr_rfs_wda: RTC: decompressed 675 bytes to 1536 bytes
krsr_rfs_wda: RTC: decompressed 1167 bytes to 3072 bytes
krsr_rfs_wda: RTC: decompressed 76 bytes to 512 bytes
Useful Metalink Notes
How to confirm if Redo Transport Compression is used In Oracle Data Guard? [ID 1927057.1]
Redo Transport Compression in a Data Guard Environment [ID 729551.1]
LOG_ARCHIVE_TRACE=16 DOES NOT SHOW REDO COMPRESSION DETAIL IN LOG TRANSPORT in 11gr2 [ID 2193605.1]
How to find out the compression rate of Redo Transport Compression ? [ID 1490751.1]

Thursday, October 25, 2018

UCP Connections Fail to Connect to DB in Mount Mode With ORA-12504 TNS:listener was not given the SID in CONNECT_DATA

Trying to connect to a database (non-CDB, CDB or PDB) in mount mode as sysdba using a UCP JDBC connection fails with
Exception in thread "main" java.sql.SQLException: Exception occurred while getting connection: oracle.ucp.UniversalConnectionPoolException: Cannot get Connection from Datasource: java.sql.SQLException: Listener refused the connection with the following error:
ORA-12504, TNS:listener was not given the SID in CONNECT_DATA
However there's no failure when OracleDataSource is used instead of UCP. The issue with UCP only appears in JDBC driver versions 18.3.0.0.0 and 12.2.0.1.0.
There is no issue connecting to a database in mount mode with driver versions 12.1.0.2.0 and 11.2.0.4.0. This appears to be a bug in later versions of the JDBC drivers. After an SR, this is being investigated under bug# 28780778.

The following Java code could be used to recreate the issue. Change the ojdbc*.jar and ucp.jar as needed to try different drivers.
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.Properties;
import oracle.jdbc.pool.OracleDataSource;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

/**
 *
 * @author Asanga
 */
public class ConnectToMount {

    public static void main(String[] args) throws SQLException {

        String username = "sys";
        String password = "xxxxx";
        boolean isSysdba = true;
        String URL = "jdbc:oracle:thin:@192.168.0.79:1521/pdb";

        System.out.println("Driver Info");
        Connection con = usingODS(username, password, URL, isSysdba);
        printDriverInfo(con);
        
        System.out.println("\nusing ODS");
        runQuery(con);

        System.out.println("\nusing UCP");
        runQuery(usingUCP(username, password, URL, isSysdba));

    }

    static void runQuery(Connection con) throws SQLException {

        PreparedStatement pr = con.prepareStatement("select db_unique_name from v$database");

        ResultSet rs = pr.executeQuery();
        while (rs.next()) {

            System.out.println(rs.getString(1));
        }

        rs.close();
        pr.close();
        con.close();
    }

    static Connection usingUCP(String username, String password, String URL, boolean isSysDBA) throws SQLException {

        PoolDataSource ds = PoolDataSourceFactory.getPoolDataSource();
        ds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        ds.setUser(username);
        ds.setPassword(password);
        ds.setURL(URL);

        if (isSysDBA) {
            // the internal_logon connection property enables logging on as SYSDBA
            Properties p = new Properties();
            p.put("internal_logon", "sysdba");

            ds.setConnectionProperties(p);
        }

        return ds.getConnection();

    }

    static Connection usingODS(String username, String password, String URL, boolean isSysDBA) throws SQLException {

        OracleDataSource ds = new OracleDataSource();
        ds.setUser(username);
        ds.setPassword(password);
        ds.setURL(URL);

        if (isSysDBA) {

            Properties p = new Properties();
            p.put("internal_logon", "sysdba");

            ds.setConnectionProperties(p);
        }

        return ds.getConnection();

    }

    static void printDriverInfo(Connection con) throws SQLException {

        DatabaseMetaData meta = con.getMetaData();

        System.out.println("Driver Name " + meta.getDriverName());
        System.out.println("Driver Version " + meta.getDriverVersion());
        System.out.println("Driver Major Version " + meta.getDriverMajorVersion());
        System.out.println("Driver Minor Version " + meta.getDriverMinorVersion());
        System.out.println("Database Major Version " + meta.getDatabaseMajorVersion());
        System.out.println("Database Minor Version " + meta.getDatabaseMinorVersion());

        java.util.Properties props = System.getProperties();
        System.out.println("\nJVM\n===");
        System.out.println(props.getProperty("java.vm.vendor"));
        System.out.println(props.getProperty("java.vm.name"));
        System.out.println(props.getProperty("java.vm.version"));
        System.out.println(props.getProperty("java.version"));
    }
}
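To compile and run the test case, put the driver jars on the classpath. The jar names below are illustrative assumptions; substitute whichever ojdbc*.jar and ucp.jar versions are under test.
# jar names are assumptions; adjust to the driver version being tested
javac -cp ojdbc8.jar:ucp.jar ConnectToMount.java
java -cp .:ojdbc8.jar:ucp.jar ConnectToMount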

Sunday, October 14, 2018

Upgrading Oracle Restart Databases From 12.2 to 18c Using DBMS_ROLLING

DBMS_ROLLING allows upgrading the databases in a Data Guard configuration with minimal downtime. In short, the DBMS_ROLLING upgrade process divides the databases in the Data Guard configuration into a leading group and a trailing group. It first upgrades the leading group databases, which include the future primary and the standbys that protect the future primary. After a switchover, the only point at which downtime is encountered, the trailing group is upgraded. More on this can be read in the Data Guard admin guide. Ideally at least four databases are needed in the DG configuration to illustrate the best use case of DBMS_ROLLING, but the same approach could be used for a two-database DG configuration as well (14.6 example). The only difference is that with fewer databases the setup is unprotected during certain steps.
This post lists the steps for upgrading 12.2 Oracle Restart databases in a DG configuration to 18c, focusing primarily on the DBMS_ROLLING aspect. An earlier post is available for upgrading Oracle Restart from 12.2 to 18c.
Some of the enhancements made to DBMS_ROLLING, such as keeping DG broker enabled, were mentioned in a previous post. The current Data Guard configuration and database roles are as follows
DGMGRL> show configuration

Configuration - dg12c2

  Protection Mode: MaxAvailability
  Members:
  colombo - Primary database
    london  - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 25 seconds ago)
1. Upgrade the GI on the standby first. This requires downtime on the standby, during which the primary will be unprotected. The primary's GI is not upgraded at this time, only the standby's; this way there's only one downtime for the overall system. The listener.ora file gets copied over to the 18c GI home automatically. However, if static listener entries were added to this file, the ORACLE_HOME values require changing to reflect the 18c home (an illustrative entry is shown after the version summary below). Once the GI home is upgraded, install the 18c DB software. Refer to the earlier post on 12.2 to 18c for upgrading GI and installing the DB software. Copy the tnsnames.ora and password files to the relevant directories in the 18c DB home. At the end of this first step the software versions on the primary and standby are as follows.
primary :- GI 12.2, DB 12.2
standby :- GI 18.0, DB running out of the 12.2 home, 18.0 DB software installed
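As an illustration of the static entry change, an entry of the following form in the copied listener.ora would have its ORACLE_HOME updated to point at the 18c home. This is a sketch only; the GLOBAL_DBNAME and SID values are placeholders, not taken from this setup.
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (GLOBAL_DBNAME = london_dgmgrl.domain.net)
      (ORACLE_HOME = /opt/app/oracle/product/18.0.0/dbhome_1)
      (SID_NAME = london)
    )
  )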
2. Start the rolling upgrade process by executing the initialization phase, then check the plan parameters.
SQL> exec DBMS_ROLLING.INIT_PLAN(future_primary=>'london');
PL/SQL procedure successfully completed.

select scope, name, curval from dba_rolling_parameters order by scope, name;
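If any of the defaults need changing, plan parameters can be adjusted with DBMS_ROLLING.SET_PARAMETER before building the plan. For example, in a configuration with more than two databases, a second standby (a hypothetical stby2 here, not part of this setup) could be assigned to the leading group so the future primary stays protected:
SQL> exec DBMS_ROLLING.SET_PARAMETER(scope=>'stby2', name=>'MEMBER', value=>'LEADING');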
3. Execute the build plan and check the generated plan instructions.
SQL> exec DBMS_ROLLING.BUILD_PLAN;
PL/SQL procedure successfully completed.

SELECT instid, target, phase, description FROM DBA_ROLLING_PLAN;
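While the subsequent phases run, the overall progress and any events raised can also be checked from the rolling upgrade views (a quick sketch):
SQL> select * from dba_rolling_status;
SQL> select * from dba_rolling_events;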
4. Start the plan. This phase, among other things, will convert the physical standby to a transient logical standby. A detailed list of the work carried out during this phase was given in an earlier post.
SQL> exec DBMS_ROLLING.START_PLAN;
PL/SQL procedure successfully completed.
The DG broker status for the configuration and databases after starting the plan is as follows
DGMGRL> show configuration

Configuration - dg12c2

  Protection Mode: MaxAvailability
  Members:
  colombo - Primary database
    london  - Transient logical standby database

Fast-Start Failover: DISABLED

Configuration Status:
    ROLLING DATABASE MAINTENANCE IN PROGRESS

DGMGRL> show database london

Database - london

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          0 seconds (computed 1 second ago)
  Average Apply Rate: 2.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    london

  Database Warning(s):
    ORA-16866: database converted to transient logical standby database for rolling database maintenance

Database Status:
WARNING

DGMGRL> show database colombo

Database - colombo

  Role:               PRIMARY
  Intended State:     TRANSPORT-ON
  Instance(s):
    colombo

Database Status:
SUCCESS


5. The upgrade of the standby could be carried out now, as it is a logical standby open in read/write mode. Run preupgrade on the transient logical standby.
$ORACLE_HOME/jdk/bin/java -jar preupgrade.jar TERMINAL
Report generated by Oracle Database Pre-Upgrade Information Tool Version
18.0.0.0.0 on 2018-08-29T08:55:03

Upgrade-To version: 18.0.0.0.0

=======================================
Status of the database prior to upgrade
=======================================
      Database Name:  COLOMBO
     Container Name:  COLOMBO
       Container ID:  0
            Version:  12.2.0.1.0
         Compatible:  12.2.0
          Blocksize:  8192
           Platform:  Linux x86 64-bit
      Timezone File:  26
  Database log mode:  ARCHIVELOG
           Readonly:  FALSE
            Edition:  EE

  Oracle Component                       Upgrade Action    Current Status
  ----------------                       --------------    --------------
  Oracle Server                          [to be upgraded]  VALID
  Oracle Workspace Manager               [to be upgraded]  VALID
  Oracle Text                            [to be upgraded]  VALID
  Oracle XML Database                    [to be upgraded]  VALID

==============
BEFORE UPGRADE
==============

  REQUIRED ACTIONS
  ================
  1.  (AUTOFIXUP) Empty the RECYCLEBIN immediately before database upgrade.

      The database contains 6 objects in the recycle bin.

      The recycle bin must be completely empty before database upgrade.

  RECOMMENDED ACTIONS
  ===================
  2.  Run 12.2.0.1.0 $ORACLE_HOME/rdbms/admin/utlrp.sql to recompile invalid
      objects.  You can view the individual invalid objects with

        SET SERVEROUTPUT ON;
        EXECUTE DBMS_PREUP.INVALID_OBJECTS;

      89 objects are INVALID.

      There should be no INVALID objects in SYS/SYSTEM or user schemas before
      database upgrade.

  3.  Review and remove any unnecessary HIDDEN/UNDERSCORE parameters.

      The database contains the following initialization parameters whose name
      begins with an underscore:

      _rac_dbtype_reset
      _transient_logical_clear_hold_mrp_bit

      Remove hidden parameters before database upgrade unless your application
      vendors and/or Oracle Support state differently.  Changes will need to be
      made in the pfile/spfile.

  4.  (AUTOFIXUP) Gather stale data dictionary statistics prior to database
      upgrade in off-peak time using:

        EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

      Dictionary statistics do not exist or are stale (not up-to-date).

      Dictionary statistics help the Oracle optimizer find efficient SQL
      execution plans and are essential for proper upgrade timing. Oracle
      recommends gathering dictionary statistics in the last 24 hours before
      database upgrade.

      For information on managing optimizer statistics, refer to the 12.2.0.1
      Oracle Database SQL Tuning Guide.

  INFORMATION ONLY
  ================
  5.  To help you keep track of your tablespace allocations, the following
      AUTOEXTEND tablespaces are expected to successfully EXTEND during the
      upgrade process.

                                                 Min Size
      Tablespace                        Size     For Upgrade
      ----------                     ----------  -----------
      SYSAUX                            1190 MB      1302 MB
      SYSTEM                             700 MB      1048 MB
      TEMP                                20 MB       150 MB

      Minimum tablespace sizes for upgrade are estimates.

  ORACLE GENERATED FIXUP SCRIPT
  =============================
  All of the issues in database COLOMBO
  which are identified above as BEFORE UPGRADE "(AUTOFIXUP)" can be resolved by
  executing the following

    SQL>@/opt/app/oracle/cfgtoollogs/london/preupgrade/preupgrade_fixups.sql

=============
AFTER UPGRADE
=============

  REQUIRED ACTIONS
  ================
  None

  RECOMMENDED ACTIONS
  ===================
  6.  Upgrade the database time zone file using the DBMS_DST package.

      The database is using time zone file version 26 and the target 18.0.0.0.0
      release ships with time zone file version 31.

      Oracle recommends upgrading to the desired (latest) version of the time
      zone file.  For more information, refer to "Upgrading the Time Zone File
      and Timestamp with Time Zone Data" in the 18.0.0.0.0 Oracle Database
      Globalization Support Guide.

  7.  (AUTOFIXUP) Gather dictionary statistics after the upgrade using the
      command:

        EXECUTE DBMS_STATS.GATHER_DICTIONARY_STATS;

      Oracle recommends gathering dictionary statistics after upgrade.

      Dictionary statistics provide essential information to the Oracle
      optimizer to help it find efficient SQL execution plans. After a database
      upgrade, statistics need to be re-gathered as there can now be tables
      that have significantly changed during the upgrade or new tables that do
      not have statistics gathered yet.

  8.  Gather statistics on fixed objects after the upgrade and when there is a
      representative workload on the system using the command:

        EXECUTE DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

      This recommendation is given for all preupgrade runs.

      Fixed object statistics provide essential information to the Oracle
      optimizer to help it find efficient SQL execution plans.  Those
      statistics are specific to the Oracle Database release that generates
      them, and can be stale upon database upgrade.

      For information on managing optimizer statistics, refer to the 12.2.0.1
      Oracle Database SQL Tuning Guide.

  ORACLE GENERATED FIXUP SCRIPT
  =============================
  All of the issues in database COLOMBO
  which are identified above as AFTER UPGRADE "(AUTOFIXUP)" can be resolved by
  executing the following

    SQL>@/opt/app/oracle/cfgtoollogs/london/preupgrade/postupgrade_fixups.sql


==================
PREUPGRADE SUMMARY
==================
  /opt/app/oracle/cfgtoollogs/london/preupgrade/preupgrade.log
  /opt/app/oracle/cfgtoollogs/london/preupgrade/preupgrade_fixups.sql
  /opt/app/oracle/cfgtoollogs/london/preupgrade/postupgrade_fixups.sql

Execute fixup scripts as indicated below:

Before upgrade log into the database and execute the preupgrade fixups
@/opt/app/oracle/cfgtoollogs/london/preupgrade/preupgrade_fixups.sql

After the upgrade:

Log into the database and execute the postupgrade fixups
@/opt/app/oracle/cfgtoollogs/london/preupgrade/postupgrade_fixups.sql

Preupgrade complete: 2018-08-29T08:55:03
Execute the preupgrade fixup script.
SQL> @/opt/app/oracle/cfgtoollogs/london/preupgrade/preupgrade_fixups.sql
Executing Oracle PRE-Upgrade Fixup Script

Auto-Generated by:       Oracle Preupgrade Script
                         Version: 18.0.0.0.0 Build: 1
Generated on:            2018-08-29 08:55:01

For Source Database:     COLOMBO
Source Database Version: 12.2.0.1.0
For Upgrade to Version:  18.0.0.0.0

Preup                             Preupgrade
Action                            Issue Is
Number  Preupgrade Check Name     Remedied    Further DBA Action
------  ------------------------  ----------  --------------------------------
    1.  purge_recyclebin          YES         None.
    2.  invalid_objects_exist     NO          Manual fixup recommended.
    3.  hidden_params             NO          Informational only.
                                              Further action is optional.
    4.  dictionary_stats          YES         None.
    5.  tablespaces_info          NO          Informational only.
                                              Further action is optional.

The fixup scripts have been run and resolved what they can. However,
there are still issues originally identified by the preupgrade that
have not been remedied and are still present in the database.
Depending on the severity of the specific issue, and the nature of
the issue itself, that could mean that your database is not ready
for upgrade.  To resolve the outstanding issues, start by reviewing
the preupgrade_fixups.sql and searching it for the name of
the failed CHECK NAME or Preupgrade Action Number listed above.
There you will find the original corresponding diagnostic message
from the preupgrade which explains in more detail what still needs
to be done.
Some of the hidden parameters were added as part of the rolling upgrade; leave them as they are. Run DBUA from the 18c home to upgrade the standby from 12.2 to 18c.
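A possible silent-mode DBUA invocation is shown below. The flags are an assumption for illustration and vary by release; check dbua -help before use.
/opt/app/oracle/product/18.0.0/dbhome_1/bin/dbua -silent -sid london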



6. After the upgrade, run the postupgrade fixup script.
@/opt/app/oracle/cfgtoollogs/london/preupgrade/postupgrade_fixups.sql
Querying the DG status at this point will show the standby DB as disabled and as a physical standby. This is incorrect information.
DGMGRL> show configuration

Configuration - dg12c2

  Protection Mode: MaxAvailability
  Members:
  colombo - Primary database
    Warning: ORA-16629: database reports a different protection level from the protection mode

    london  - Physical standby database (disabled)

Fast-Start Failover: DISABLED

Configuration Status:
    ROLLING DATABASE MAINTENANCE IN PROGRESS

DGMGRL> show database london

Database - london

  Role:               PHYSICAL STANDBY
  Intended State:     OFFLINE
  Transport Lag:      (unknown)
  Apply Lag:          (unknown)
  Average Apply Rate: (unknown)
  Real Time Query:    OFF
  Instance(s):
    london

Database Status:
SHUTDOWN
Enabling the standby doesn't resolve this either.
DGMGRL> enable database london
Enabled.

DGMGRL> show configuration

Configuration - dg12c2

  Protection Mode: MaxAvailability
  Members:
  colombo - Primary database
    Warning: ORA-16629: database reports a different protection level from the protection mode

    london  - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
    ROLLING DATABASE MAINTENANCE IN PROGRESS

DGMGRL> show database london

Database - london

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          25 minutes 14 seconds (computed 0 seconds ago)
  Average Apply Rate: (unknown)
  Real Time Query:    OFF
  Instance(s):
    london

  Database Error(s):
    ORA-16816: incorrect database role
Database Status:
ERROR
However, querying the database role shows it's a logical standby.

SQL> select database_role from v$database;

DATABASE_ROLE
----------------
LOGICAL STANDBY
At the end of the upgrade the transient logical standby must be open in read/write mode. However, the Oracle Restart configuration still has the old role and startup state associated with the standby database.
srvctl config database -d london
Database unique name: london
Database name:
Oracle home: /opt/app/oracle/product/18.0.0/dbhome_1
Oracle user: oracle
Spfile: /opt/app/oracle/product/18.0.0/dbhome_1/dbs/spfilelondon.ora
Password file:
Domain: domain.net
Start options: mount
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Disk Groups: DATA,FRA
Services: flt,bx
OSDBA group:
OSOPER group:
Database instance: london
For this reason the DB doesn't open in read/write mode but only in mount mode. As DG broker is associated with the Oracle Restart configuration, it also takes the database role to be a physical standby. In this state, attempting a switchover will fail as well.
SQL> exec DBMS_ROLLING.SWITCHOVER;
BEGIN DBMS_ROLLING.SWITCHOVER; END;

*
ERROR at line 1:
ORA-45427: logical standby Redo Apply process was not running
ORA-06512: at "SYS.DBMS_ROLLING", line 89
ORA-06512: at line 1
7. To fix this, manually start the standby DB in read/write mode and also start the logical standby apply process.
SQL> ALTER DATABASE START LOGICAL STANDBY APPLY;

Database altered.
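To keep Oracle Restart from bringing the database back in mount mode with the stale role on a subsequent restart, the stored metadata could also be updated at this point. A sketch using standard srvctl options (not a step carried out in this run):
srvctl modify database -d london -r logical_standby -s open
With SQL apply running, the broker status is as follows.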
DGMGRL> show configuration

Configuration - dg12c2

  Protection Mode: MaxAvailability
  Members:
  colombo - Primary database
    london  - Transient logical standby database

Fast-Start Failover: DISABLED

Configuration Status:
    ROLLING DATABASE MAINTENANCE IN PROGRESS

DGMGRL> show database london

Database - london

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          3 minutes 53 seconds (computed 1 second ago)
  Average Apply Rate: (unknown)
  Real Time Query:    OFF
  Instance(s):
    london

  Database Warning(s):
    ORA-16866: database converted to transient logical standby database for rolling database maintenance

Database Status:
WARNING
8. Next is the switchover phase. This is the only time a total system outage occurs (lasting until the applications connect to the new primary; refer here for JDBC connection failover to further mitigate the downtime, and see the connect descriptor sketch at the end of this step). The switchover must be done using the DBMS_ROLLING package and not DG broker.
SQL> exec DBMS_ROLLING.SWITCHOVER;

PL/SQL procedure successfully completed.
With this the roles of the databases are reversed. The original standby becomes the new primary while the original primary becomes the new transient logical standby.
DGMGRL> show configuration

Configuration - dg12c2

  Protection Mode: MaxAvailability
  Members:
  london  - Primary database
    Warning: ORA-16629: database reports a different protection level from the protection mode

    colombo - Transient logical standby database

Fast-Start Failover: DISABLED

Configuration Status:
    ROLLING DATABASE MAINTENANCE IN PROGRESS

DGMGRL> show database london

Database - london

  Role:               PRIMARY
  Intended State:     TRANSPORT-ON
  Instance(s):
    london

  Database Warning(s):
    ORA-16629: database reports a different protection level from the protection mode

Database Status:
WARNING

DGMGRL> show database colombo

Database - colombo

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      (unknown)
  Apply Lag:          (unknown)
  Average Apply Rate: (unknown)
  Real Time Query:    OFF
  Instance(s):
    colombo

  Database Warning(s):
    ORA-16866: database converted to transient logical standby database for rolling database maintenance

Database Status:
WARNING
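On the application side, the switchover outage can be shortened with a connect-time failover address list so that clients reach whichever database currently offers the service. A minimal tnsnames.ora sketch; the host names are placeholders and it assumes the service (flt here) is active only on the primary:
flt =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (FAILOVER = ON)
      (ADDRESS = (PROTOCOL = TCP)(HOST = london-host)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = colombo-host)(PORT = 1521))
    )
    (CONNECT_DATA = (SERVICE_NAME = flt.domain.net))
  )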
At this stage the software versions on the databases are as follows
Primary :- GI 18c, DB home 18c and DB running out of 18c home
Standby :- GI 12.2, DB 12.2


9. Upgrade GI on the standby (old primary) to 18c and install the 18c DB software. Copy the tnsnames.ora, init file and password file to the relevant directories in the 18c DB home. Finally, remove the database from the Oracle Restart configuration and add it using the 18c binaries, specifying the 18c home. The database is added to Oracle Restart as a physical standby even though its actual role is logical standby. This way, when the final finish phase is executed, there's no need to make any changes to the Oracle Restart configuration.
srvctl config database -d colombo
Database unique name: colombo
Database name: colombo
Oracle home: /opt/app/oracle/product/12.2.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/COLOMBO/PARAMETERFILE/spfile.266.981889629
Password file:
Domain: domain.net
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: DATA,FRA
Services: flt,tbx
OSDBA group:
OSOPER group:
Database instance: colombo

srvctl remove database -d colombo
Remove the database colombo? (y/[n]) y

export ORACLE_HOME=/opt/app/oracle/product/18.0.0/dbhome_1
export PATH=$ORACLE_HOME/bin:$PATH

srvctl add database -d colombo -o /opt/app/oracle/product/18.0.0/dbhome_1 -r physical_standby -s mount -p +DATA/COLOMBO/PARAMETERFILE/spfile.266.981889629
srvctl config database -d colombo
Database unique name: colombo
Database name:
Oracle home: /opt/app/oracle/product/18.0.0/dbhome_1
Oracle user: oracle
Spfile: +DATA/COLOMBO/PARAMETERFILE/spfile.266.981889629
Password file:
Domain:
Start options: mount
Stop options: immediate
Database role: PHYSICAL_STANDBY
Management policy: AUTOMATIC
Disk Groups:
Services:
OSDBA group:
OSOPER group:
Database instance: colombo

srvctl start database -d colombo -o mount
Actual role of the standby database
SQL> select open_mode,database_role from v$database;

OPEN_MODE            DATABASE_ROLE
-------------------- ----------------
MOUNTED              LOGICAL STANDBY
If DG broker shows the standby DB as disabled, then enable it.
DGMGRL> show configuration

Configuration - dg12c2

  Protection Mode: MaxAvailability
  Members:
  london  - Primary database
    Warning: ORA-16629: database reports a different protection level from the protection mode

    colombo - Physical standby database (disabled)

Fast-Start Failover: DISABLED

Configuration Status:
    ROLLING DATABASE MAINTENANCE IN PROGRESS

DGMGRL> enable database colombo
Enabled.

DGMGRL> show configuration

Configuration - dg12c2

  Protection Mode: MaxAvailability
  Members:
  london  - Primary database
    Error: ORA-16810: multiple errors or warnings detected for the member

    colombo - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
    ROLLING DATABASE MAINTENANCE IN PROGRESS
10. To convert the new logical standby back to a physical standby, execute the finish plan. This will start redo apply on the new branch of redo.
SQL> EXECUTE DBMS_ROLLING.FINISH_PLAN;

PL/SQL procedure successfully completed.
Let the new standby catch up with the redo and query the DG status.
DGMGRL> show configuration

Configuration - dg12c2

  Protection Mode: MaxAvailability
  Members:
  london  - Primary database
    colombo - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 25 seconds ago)

DGMGRL> show database london

Database - london

  Role:               PRIMARY
  Intended State:     TRANSPORT-ON
  Instance(s):
    london

Database Status:
SUCCESS

DGMGRL> show database colombo

Database - colombo

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          0 seconds (computed 0 seconds ago)
  Average Apply Rate: 11.15 MByte/s
  Real Time Query:    OFF
  Instance(s):
    colombo

Database Status:
SUCCESS
11. Finally call the destroy plan.
SQL> exec DBMS_ROLLING.DESTROY_PLAN;

PL/SQL procedure successfully completed.
With this step the upgrade process is complete.
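As a quick sanity check (an assumption to verify rather than output captured from this run), the plan views should be empty once the plan state is destroyed:
SQL> select count(*) from dba_rolling_plan;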

12. The upgrade resulted in only one downtime, during the switchover process. However, if the databases need to return to their original roles, another switchover is needed. This could be done using DG broker, but it would result in a second system outage.
DGMGRL> validate database colombo

  Database Role:     Physical standby database
  Primary Database:  london

  Ready for Switchover:  Yes
  Ready for Failover:    Yes (Primary Running)

  Managed by Clusterware:
    london :  YES
    colombo:  YES

DGMGRL> switchover to colombo
Performing switchover NOW, please wait...
Operation requires a connection to database "colombo"
Connecting ...
Connected to "colombo"
Connected as SYSDBA.
New primary database "colombo" is opening...
Oracle Clusterware is restarting database "london" ...
Connected to an idle instance.
Connected to an idle instance.
Connected to an idle instance.
Connected to an idle instance.
Connected to an idle instance.
Connected to an idle instance.
Connected to "london"
Connected to "london"
Switchover succeeded, new primary is "colombo"

DGMGRL> show configuration

Configuration - dg12c2

  Protection Mode: MaxAvailability
  Members:
  colombo - Primary database
    london  - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 69 seconds ago)
With this last step, both databases return to the roles they held before the upgrade process.