Sunday, March 18, 2018

Flashback Primary DB in a Data Guard - PDB vs non-CDB vs CDB

This post shows the steps for recovering the standby after a PDB is flashed back on the primary DB. Later sections show the same for a non-CDB and a CDB. It's assumed that the standby DB has flashback enabled.
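Flashback status can be verified on both the primary and the standby before starting; a quick check using v$database (FLASHBACK_ON should return YES on both):

SQL> select flashback_on from v$database;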

Flashback a PDB in the primary CDB
12.2 introduced new features that allow creating restore points at the PDB level and flashing back an individual PDB without impacting the other PDBs or the CDB. Flashing back a PDB that's plugged into a CDB in a data guard configuration causes recovery at the standby to fail and terminate. This is because the standby PDB would have a higher SCN than the PDB on the primary that underwent the flashback. The steps below show how to recover from this situation.
1. The flashback is done using a restore point. As such, first create the restore point inside the PDB. The steps are the same even if the restore point is a CDB restore point (i.e. when a restore point created in the CDB is used to flash back a PDB).
SQL>  alter session set container=pdbapp1;

SQL> create restore point pdb_restore_point guarantee flashback database;

SQL> select con_id,scn,TIME,name from v$restore_point;

    CON_ID        SCN TIME                           NAME
---------- ---------- ------------------------------ ------------------
         5    5662626 27-FEB-18 15.06.47.000000000   PDB_RESTORE_POINT
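Since the restore point was created from inside the PDB, it is scoped to that container. Whether a restore point is a PDB-level restore point can also be checked with the PDB_RESTORE_POINT column of v$restore_point (used again later in this post):

SQL> select con_id, name, guarantee_flashback_database, pdb_restore_point from v$restore_point;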
2. If a DB is a CDB then application data would reside in a PDB. So it's safe to assume that restore points would be used to recover from user errors at the PDB level rather than the CDB level. To simulate such an error, data is accidentally removed from a table by truncating it. The restore point is later used to recover from this error.
sqlplus  asanga/asa@city7:1581/pdbapp1

SQL> select count(*) from a;

  COUNT(*)
----------
       100

SQL> truncate table a;

Table truncated.

SQL> select count(*) from a;

  COUNT(*)
----------
         0
3. Close the PDB and execute the flashback. Once complete, open the PDB with the resetlogs option.
SQL> alter session set container=pdbapp1;
Session altered.

SQL> show con_name

CON_NAME
------------------------------
PDBAPP1

SQL> alter pluggable database close;
Pluggable database altered.

SQL>  flashback pluggable database PDBAPP1 to restore point PDB_RESTORE_POINT;
Flashback complete.

SQL> alter pluggable database open resetlogs;
Pluggable database altered.

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         5 PDBAPP1                        READ WRITE NO
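With the PDB open again, a quick check against the same table confirms that the pre-truncate data is back (the count should match step 2):

SQL> conn asanga/asa@city7:1581/pdbapp1
SQL> select count(*) from a;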
4. In the alert log of the primary DB, the following could be seen during the flashback.
PDBAPP1(5): flashback pluggable database PDBAPP1 to restore point PDB_RESTORE_POINT
2018-02-27T15:12:44.433139+00:00
PDBAPP1(5):Flashback Restore Start
PDBAPP1(5):Restore Flashback Pluggable Database PDBAPP1 (5) until change 5662627
PDBAPP1(5):Flashback Restore Complete
PDBAPP1(5):Flashback Media Recovery Start
2018-02-27T15:12:45.086123+00:00
PDBAPP1(5):Serial Media Recovery started
2018-02-27T15:12:45.233559+00:00
PDBAPP1(5):Recovery of Online Redo Log: Thread 1 Group 8 Seq 257 Reading mem 0
PDBAPP1(5):  Mem# 0: +DATA/PRODCDB/ONLINELOG/group_8.284.968237763
PDBAPP1(5):  Mem# 1: +FRA/PRODCDB/ONLINELOG/group_8.264.968237763
2018-02-27T15:12:45.245468+00:00
PDBAPP1(5):Incomplete Recovery applied until change 5662627 time 02/27/2018 15:06:47
PDBAPP1(5):Flashback Media Recovery Complete
PDBAPP1(5):Flashback Pluggable Database PDBAPP1 (5) recovered until change 5662627
PDBAPP1(5):Completed:  flashback pluggable database PDBAPP1 to restore point PDB_RESTORE_POINT
5. In the standby DB's alert log, the following could be seen. Note that the SCN mentioned in the standby alert log for the PDB is the same as the SCN mentioned on the primary. Another point to note is that, while the flashback of a PDB on the primary doesn't affect other PDBs on the primary, it impacts the entire standby as recovery is stopped. This means that flashing back a single PDB on the primary could stop recovery of the entire standby until the standby PDB is flashed back to a state where it can apply redo.
2018-02-27T16:47:45.999901+00:00
(5):Recovery of pluggable database PDBAPP1 aborted due to pluggable database open resetlog marker.
(5):To continue recovery, restore all data files for this PDB to checkpoint SCN lower than 5662627, or timestamp before 02/27/2018 15:06:47, and restart recovery
MRP0: Background Media Recovery terminated with error 39874
2018-02-27T16:47:46.008223+00:00
Errors in file /opt/app/oracle/diag/rdbms/stbycdb/stbycdb/trace/stbycdb_pr00_3398.trc:
ORA-39874: Pluggable Database PDBAPP1 recovery halted
ORA-39873: Restore all data files to a checkpoint SCN lower than 5662627.
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Recovery stopped due to failure in applying recovery marker (opcode 17.46).
Datafiles are recovered to a consistent state at change 5663609 but controlfile could be ahead of datafiles.
2018-02-27T16:47:46.295066+00:00
Errors in file /opt/app/oracle/diag/rdbms/stbycdb/stbycdb/trace/stbycdb_pr00_3398.trc:
ORA-39874: Pluggable Database PDBAPP1 recovery halted
ORA-39873: Restore all data files to a checkpoint SCN lower than 5662627.
2018-02-27T16:47:46.402997+00:00
MRP0: Background Media Recovery process shutdown (stbycdb)
The data guard broker configuration will show an error status and the standby database will show that redo apply has stopped.
DGMGRL> show configuration

Configuration - dg12c2

  Protection Mode: MaxAvailability
  Members:
  prodcdb - Primary database
    stbycdb - Physical standby database
      Error: ORA-16810: multiple errors or warnings detected for the member

Fast-Start Failover: DISABLED

Configuration Status:
ERROR   (status updated 5658 seconds ago)

DGMGRL> show database stbycdb

Database - stbycdb

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          1 minute 9 seconds (computed 0 seconds ago)
  Average Apply Rate: 5.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    stbycdb

  Database Error(s):
    ORA-16766: Redo Apply is stopped

  Database Warning(s):
    ORA-16853: apply lag has exceeded specified threshold

Database Status:
ERROR
6. To recover from this error, flash back the standby PDB to an SCN lower than that of the primary PDB. In a non-CDB (and in a CDB) this value is obtained from the RESETLOGS_CHANGE# column of the v$database view. But in CDBs this column is updated only at the CDB level. If the CDB doesn't undergo a resetlogs then this column is not updated with the resetlogs change#. Querying it from within the PDB also has no effect, as that too will reflect the resetlogs change# shown at the CDB level. In this case the CDB didn't undergo any resetlogs, so querying v$database inside the PDB shows the following.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       MOUNTED
         5 PDBAPP1                        MOUNTED

SQL> alter session set container=pdbapp1;
Session altered.

SQL> select resetlogs_change# from v$database;

RESETLOGS_CHANGE#
-----------------
                1
There are two places that could be used to obtain the SCN to flash back to. The first is the alert log, as shown above. The second is the INCARNATION_SCN column in the v$pdb_incarnation view in the primary PDB. The Oracle reference doc gives the description of the INCARNATION_SCN column as "The SCN to flashback or recover to for this PDB incarnation". During testing it was found that the SCN shown in the alert log is the same as the one shown in v$pdb_incarnation.INCARNATION_SCN.
SQL> select incarnation_scn from v$pdb_incarnation where status='CURRENT';

INCARNATION_SCN
---------------
        5662627
7. Flash back the PDB on the standby to an SCN that is two less than the one shown in INCARNATION_SCN (INCARNATION_SCN - 2). If the PDB was open read only then close the PDB and have it in the mount state before the flashback, as sketched below.
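A minimal sketch of closing the standby PDB before the flashback, needed only if it was open read only (redo apply is already stopped at this point due to the error):

SQL> alter session set container=pdbapp1;
SQL> alter pluggable database close;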
SQL> flashback pluggable database to scn 5662625;
Flashback complete.
Check the alert log for any issues. The following could be seen in the standby alert log during the PDB flashback.
PDBAPP1(5):flashback pluggable database to scn 5662625
2018-02-27T16:51:42.113608+00:00
PDBAPP1(5):Flashback Restore Start
PDBAPP1(5):Restore Flashback Pluggable Database PDBAPP1 (5) until change 5659443
PDBAPP1(5):Flashback Restore Complete
PDBAPP1(5):Completed: flashback pluggable database to scn 5662625
8. Start the redo apply and check the DG broker and standby database status
DGMGRL> edit database stbycdb set state='APPLY-ON';
Succeeded.

DGMGRL> show configuration

Configuration - dg12c2

  Protection Mode: MaxAvailability
  Members:
  prodcdb - Primary database
    stbycdb - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 26 seconds ago)

DGMGRL> show database stbycdb

Database - stbycdb

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 1 second ago)
  Apply Lag:          0 seconds (computed 1 second ago)
  Average Apply Rate: 66.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    stbycdb

Database Status:
SUCCESS
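The apply status could also be cross-checked on the standby outside of the broker; a quick check using v$dataguard_stats:

SQL> select name, value from v$dataguard_stats where name in ('transport lag','apply lag');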
This concludes the first part of the post, flashing back a PDB in a data guard configuration.
As mentioned earlier, the SCN to flash the standby PDB back to could be found in the alert log and in v$pdb_incarnation. Each time the PDB undergoes a resetlogs the incarnation goes up, so it's important to get the SCN from the current incarnation. The output below is from a new PDB (also called PDBAPP1). When the primary PDB is flashed back, the standby alert log has the following.
2018-03-14T18:28:58.891103+00:00
(4):Recovery of pluggable database PDBAPP1 aborted due to pluggable database open resetlog marker.
(4):To continue recovery, restore all data files for this PDB to checkpoint SCN lower than 9555922, or timestamp before 03/14/2018 16:53:35, and restart recovery
MRP0: Background Media Recovery terminated with error 39874
2018-03-14T18:28:58.897371+00:00
Errors in file /opt/app/oracle/diag/rdbms/stbycdb/stbycdb/trace/stbycdb_pr00_10568.trc:
ORA-39874: Pluggable Database PDBAPP1 recovery halted
ORA-39873: Restore all data files to a checkpoint SCN lower than 9555922.
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Recovered data files to a consistent state at change 9556551
2018-03-14T18:28:59.052367+00:00
Errors in file /opt/app/oracle/diag/rdbms/stbycdb/stbycdb/trace/stbycdb_pr00_10568.trc:
ORA-39874: Pluggable Database PDBAPP1 recovery halted
ORA-39873: Restore all data files to a checkpoint SCN lower than 9555922.
2018-03-14T18:28:59.154425+00:00
MRP0: Background Media Recovery process shutdown (stbycdb)
In the v$pdb_incarnation view on the primary:
SQL> select * from v$pdb_incarnation;

DB_INCARNATION# PDB_INCARNATION# STATUS  INCARNATION_SCN INCARNATI BEGIN_RESETLOGS_SCN BEGIN_RES END_RESETLOGS_SCN END_RESET PRIOR_DB_INCARNATION# PRIOR_PD FLA     CON_ID
--------------- ---------------- ------- --------------- --------- ------------------- --------- ----------------- --------- --------------------- -------- --- ----------
              1                3 CURRENT         9555922 14-MAR-18             9556548 14-MAR-18           9556548 14-MAR-18                     1 0        YES          4
              1                0 PARENT                1 19-JAN-18                   1 19-JAN-18                 1 19-JAN-18                     0          YES          4
When the PDB is flashed back again on the primary, the standby alert log and the v$pdb_incarnation view on the primary show the following.
(4):Recovery of pluggable database PDBAPP1 aborted due to pluggable database open resetlog marker.
(4):To continue recovery, restore all data files for this PDB to checkpoint SCN lower than 9797566, or timestamp before 03/15/2018 10:06:43, and restart recovery
MRP0: Background Media Recovery terminated with error 39874
2018-03-15T11:41:57.572285+00:00
Errors in file /opt/app/oracle/diag/rdbms/stbycdb/stbycdb/trace/stbycdb_pr00_11330.trc:
ORA-39874: Pluggable Database PDBAPP1 recovery halted
ORA-39873: Restore all data files to a checkpoint SCN lower than 9797566.
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Recovery stopped due to failure in applying recovery marker (opcode 17.46).
Datafiles are recovered to a consistent state at change 9797804 but controlfile could be ahead of datafiles.
2018-03-15T11:41:57.850051+00:00
Errors in file /opt/app/oracle/diag/rdbms/stbycdb/stbycdb/trace/stbycdb_pr00_11330.trc:
ORA-39874: Pluggable Database PDBAPP1 recovery halted
ORA-39873: Restore all data files to a checkpoint SCN lower than 9797566.
2018-03-15T11:41:57.953757+00:00
MRP0: Background Media Recovery process shutdown (stbycdb)

DB_INCARNATION# PDB_INCARNATION# STATUS  INCARNATION_SCN INCARNATI BEGIN_RESETLOGS_SCN BEGIN_RES END_RESETLOGS_SCN END_RESET PRIOR_DB_INCARNATION# PRIOR_PD FLA     CON_ID
--------------- ---------------- ------- --------------- --------- ------------------- --------- ----------------- --------- --------------------- -------- --- ----------
              1                4 CURRENT         9797566 15-MAR-18             9797801 15-MAR-18           9797801 15-MAR-18                     1 3        YES          4
              1                3 PARENT          9555922 14-MAR-18             9556548 14-MAR-18           9556548 14-MAR-18                     1 0        YES          4
              1                0 PARENT                1 19-JAN-18                   1 19-JAN-18                 1 19-JAN-18                     0          YES          4
Flashback PDB Created With standbys=none on Primary
If the PDB was created with standbys=none, then there's nothing to be done on the standby when the PDB goes through a flashback.
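For reference, such a PDB would have been created on the primary with the STANDBYS=NONE clause, along the lines of the sketch below (PDB and admin user names are placeholders, and OMF is assumed so no file name conversion is specified):

SQL> create pluggable database pdbnostby admin user pdbadmin identified by pdbadmin standbys=none;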



Flashback Primary DB (non-CDB)
These steps show a flashback of a non-CDB primary in a data guard configuration. The database version is 11.2.0.4 and the data guard configuration is similar to an earlier post (single instance with Oracle restart).

1. Create a restore point on the primary to be used for the flashback.
SQL>  create restore point first_restore guarantee flashback database;
Restore point created.

SQL> select scn,TIME,name from v$restore_point;

       SCN TIME                                               NAME
---------- -------------------------------------------------- -------------
   6864231 27-FEB-18 03.10.15.000000000 PM                    FIRST_RESTORE
2. Simulate a user error as before
SQL> conn asanga/asa
Connected.
SQL> select count(*) from ins;

  COUNT(*)
----------
     17223

SQL> truncate table ins;
Table truncated.

SQL> select count(*) from ins;

  COUNT(*)
----------
         0
3. Start the primary in mount mode and issue the flashback command. Once the flashback is complete, open the primary with resetlogs.
SQL> shutdown immediate;
SQL> startup mount;

SQL> FLASHBACK DATABASE TO RESTORE POINT FIRST_RESTORE;
Flashback complete.

SQL> alter database open resetlogs;
4. Select the resetlogs_change# from the primary DB
SQL>  select resetlogs_change# from v$database;

RESETLOGS_CHANGE#
-----------------
          6864233
5. Check that the user error is corrected.
conn asanga/asa
SQL> select count(*) from ins;

  COUNT(*)
----------
     17223
6. At this stage, redo apply on the standby would have stopped. The following could be seen in the standby alert log; note the Resetlogs SCN, which matches the primary's RESETLOGS_CHANGE# from step 4.
Tue Feb 27 16:08:08 2018
RFS[14]: Assigned to RFS process 3828
RFS[14]: New Archival REDO Branch: 969203936 Current: 967399936
RFS[14]: Selected log 9 for thread 1 sequence 1 dbid 376209786 branch 969203936
A new recovery destination branch has been registered
RFS[14]: Standby in the future of new recovery destinationBranch(resetlogs_id) 969203936
Incomplete Recovery SCN: 6864715
Resetlogs SCN: 6864233
Standby Became Primary SCN: 4218629
Flashback database to SCN 4218629 to follow new branch
Flashback database to SCN 4218629 to follow new branch
RFS[14]: New Archival REDO Branch(resetlogs_id): 969203936  Prior: 967399936
RFS[14]: Archival Activation ID: 0x16b5779b Current: 0x169a808c
RFS[14]: Effect of primary database OPEN RESETLOGS
RFS[14]: Managed Standby Recovery process is active
RFS[14]: Incarnation entry added for Branch(resetlogs_id): 969203936 (stdby)
...
Tue Feb 27 16:08:09 2018
MRP0: Incarnation has changed! Retry recovery...
Errors in file /opt/app/oracle/diag/rdbms/stdby/stdby/trace/stdby_pr00_19722.trc:
ORA-19906: recovery target incarnation changed during recovery
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
...
Tue Feb 27 16:08:10 2018
Managed Standby Recovery starting Real Time Apply
Warning: Recovery target destination is in a sibling branch
of the controlfile checkpoint. Recovery will only recover
changes to datafiles.
Datafile 1 (ckpscn 6864715) is orphaned on incarnation#=2
MRP0: Detected orphaned datafiles!
Recovery will possibly be retried after flashback...
Errors in file /opt/app/oracle/diag/rdbms/stdby/stdby/trace/stdby_pr00_3859.trc:
ORA-19909: datafile 1 belongs to an orphan incarnation
ORA-01110: data file 1: '+DATA/stdby/datafile/system.258.964883351'
Managed Standby Recovery not using Real Time Apply
Recovery Slave PR00 previously exited with exception 19909
Archived Log entry 4437 added for thread 1 sequence 175 rlc 967399936 ID 0x169a808c dest 3:
Changing standby controlfile to MAXIMUM AVAILABILITY level
RFS[15]: Selected log 10 for thread 1 sequence 4 dbid 376209786 branch 969203936
Tue Feb 27 16:08:12 2018
Archived Log entry 4438 added for thread 1 sequence 3 ID 0x16b5779b dest 1:
Tue Feb 27 16:08:31 2018
MRP0: Background Media Recovery process shutdown (stdby)
7. The data guard broker would show the following status
DGMGRL> show configuration

Configuration - fz_db_dg

  Protection Mode: MaxAvailability
  Databases:
    prod  - Primary database
    stdby - Physical standby database
      Error: ORA-16810: multiple errors or warnings detected for the database

Fast-Start Failover: DISABLED

Configuration Status:
ERROR

DGMGRL>  show database stdby

Database - stdby

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   (unknown)
  Apply Lag:       0 seconds (computed 0 seconds ago)
  Apply Rate:      137.00 KByte/s
  Real Time Query: OFF
  Instance(s):
    stdby

  Database Error(s):
    ORA-16700: the standby database has diverged from the primary database
    ORA-16766: Redo Apply is stopped

Database Status:
ERROR
8. Flash back the standby to the SCN that is (RESETLOGS_CHANGE# - 2). If the standby was open in read only mode then close it and start it in mount mode, as sketched below.
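A minimal sketch of getting the standby to the mount state first, assuming the broker is in use and the standby was open read only:

DGMGRL> edit database stdby set state='APPLY-OFF';

SQL> shutdown immediate;
SQL> startup mount;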
SQL> flashback database to scn 6864231;
Flashback complete.
In the standby alert log the following could be seen.
Tue Feb 27 16:11:46 2018
Media Recovery Log +FRA/stdby/archivelog/2018_02_27/thread_1_seq_175.459.969206891
Incomplete Recovery applied until change 6864231 time 02/27/2018 15:10:15
Flashback Media Recovery Complete
Setting recovery target incarnation to 3
Completed: flashback database to scn 6864230
9. Enable redo apply and check the data guard broker status
DGMGRL> edit database stdby set state='APPLY-ON';
Succeeded.

DGMGRL> show configuration

Configuration - fz_db_dg

  Protection Mode: MaxAvailability
  Databases:
    prod  - Primary database
    stdby - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS

DGMGRL> show database stdby ;

Database - stdby

  Role:            PHYSICAL STANDBY
  Intended State:  APPLY-ON
  Transport Lag:   0 seconds (computed 0 seconds ago)
  Apply Lag:       0 seconds (computed 0 seconds ago)
  Apply Rate:      0 Byte/s
  Real Time Query: OFF
  Instance(s):
    stdby

Database Status:
SUCCESS

Flashback Primary DB (CDB)
When the primary CDB is flashed back, all PDBs plugged into it also undergo the flashback. The steps are similar to those of a non-CDB except for a few key considerations specific to a CDB. These are mentioned at the end of this section.
1. Currently the CDB has two PDBs plugged in.
SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       READ ONLY  NO
         3 PDBDEV                         READ WRITE NO
         4 PDBAPP1                        READ WRITE NO
2. Create a CDB restore point
SQL> create restore point first_restore guarantee flashback database;

Restore point created.

SQL> select scn,TIME,name,PDB_RESTORE_POINT from  v$restore_point;

       SCN TIME                           NAME                 PDB
---------- ------------------------------ -------------------- ---
  10050579 15-MAR-18 16.04.22.000000000   FIRST_RESTORE        NO
3. Similar to before, simulate a user error to test the flashback; a sketch reusing the earlier schema and table is shown below.
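A minimal sketch of the simulation, reusing the connection, schema and table from the PDB example above:

SQL> conn asanga/asa@city7:1581/pdbapp1
SQL> truncate table a;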
4. Put the primary CDB into mount mode and issue the flashback command
startup mount;

SQL> show pdbs

    CON_ID CON_NAME                       OPEN MODE  RESTRICTED
---------- ------------------------------ ---------- ----------
         2 PDB$SEED                       MOUNTED
         3 PDBDEV                         MOUNTED
         4 PDBAPP1                        MOUNTED

SQL> FLASHBACK DATABASE TO RESTORE POINT FIRST_RESTORE;

Flashback complete.

SQL> alter database open resetlogs;
5. Get the resetlogs_change# from the primary CDB after opening it.
SQL> select resetlogs_change# from v$database;

RESETLOGS_CHANGE#
-----------------
         10050581
6. Similar to the previous cases, redo apply on the standby will stop. The following could be seen in the standby alert log.
A new recovery destination branch has been registered
RFS[7]: Standby in the future of new recovery destinationBranch(resetlogs_id) 970848377
Incomplete Recovery SCN: 10050343
Resetlogs SCN: 10050581

Standby Became Primary SCN: 10049187

Flashback database to SCN 10049187 to follow new branch
Flashback database to SCN 10049187 to follow new branch
RFS[7]: New Archival REDO Branch(resetlogs_id): 970848377  Prior: 970844177
RFS[7]: Archival Activation ID: 0xb0f552e2 Current: 0xb0f5fd63
RFS[7]: Effect of primary database OPEN RESETLOGS
RFS[7]: Managed Standby Recovery process is active
2018-03-15T17:40:26.153911+00:00
RFS[7]: Incarnation entry added for Branch(resetlogs_id): 970848377 (stbycdb)
2018-03-15T17:40:26.170006+00:00
Setting recovery target incarnation to 3
2018-03-15T17:40:26.220855+00:00
MRP0: Incarnation has changed! Retry recovery...
2018-03-15T17:40:26.228354+00:00
Errors in file /opt/app/oracle/diag/rdbms/stbycdb/stbycdb/trace/stbycdb_pr00_3093.trc:
ORA-19906: recovery target incarnation changed during recovery
Managed Standby Recovery not using Real Time Apply
Recovery interrupted!
Recovered data files to a consistent state at change 10050799
2018-03-15T17:40:26.525958+00:00
...
2018-03-15T17:40:32.758453+00:00
Changing standby controlfile to MAXIMUM AVAILABILITY level
RFS[6]: Selected log 9 for T-1.S-4 dbid 2963914998 branch 970848377
2018-03-15T17:40:32.943141+00:00
Archived Log entry 36 added for T-1.S-3 ID 0xb0f552e2 LAD:1
2018-03-15T17:40:48.131861+00:00
MRP0: Background Media Recovery process shutdown (stbycdb)
7. The data guard broker shows an error status.
DGMGRL> show configuration

Configuration - dg12c2

  Protection Mode: MaxAvailability
  Members:
  prodcdb - Primary database
    stbycdb - Physical standby database
      Error: ORA-16810: multiple errors or warnings detected for the member

Fast-Start Failover: DISABLED

Configuration Status:
ERROR   (status updated 5670 seconds ago)

DGMGRL> show database stbycdb

Database - stbycdb

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      (unknown)
  Apply Lag:          0 seconds (computed 0 seconds ago)
  Average Apply Rate: 7.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    stbycdb

  Database Error(s):
    ORA-16700: the standby database has diverged from the primary database
    ORA-16766: Redo Apply is stopped

  Database Warning(s):
    ORA-16856: transport lag could not be determined

Database Status:
ERROR
8. To resolve the issue, start the standby CDB in mount mode and flash back to the SCN value of (RESETLOGS_CHANGE# - 2).
SQL> flashback database to scn 10050579;
Flashback complete.
Monitor the alert log for any issues. In this case the flashback was able to go across the creation and dropping of a tablespace.
2018-03-15T17:44:41.398074+00:00
flashback database to scn 10050579
2018-03-15T17:44:41.879824+00:00
Flashback Restore Start
Flashback: created tablespace #5: 'TEST' of pdb #4 with key index #-546742896 in the controlfile.
Flashback: created OFFLINE file 'UNNAMED00152' for tablespace #5 of pdb #4 in the controlfile.
Filename was:
'+DATA/STBYCDB/67632B96894E2116E0535500A8C05DA5/DATAFILE/test.268.970853853' when dropped.
File will have to be restored from a backup and recovered.
2018-03-15T17:44:42.444166+00:00
Flashback: deleted datafile #152 in tablespace #5 of pdb #4 from control file.
Flashback: dropped tablespace #5: 'TEST' of pdb #4 from the control file.
Flashback Restore Complete
Flashback Media Recovery Start
2018-03-15T17:44:42.582223+00:00
Setting recovery target incarnation to 2
2018-03-15T17:44:42.629691+00:00
 Started logmerger process
2018-03-15T17:44:43.352732+00:00
Parallel Media Recovery started with 4 slaves
2018-03-15T17:44:43.577325+00:00
Media Recovery Log +FRA/STBYCDB/ARCHIVELOG/2018_03_15/thread_1_seq_30.281.970854029
2018-03-15T17:44:43.778830+00:00
Media Recovery Log +FRA/STBYCDB/ARCHIVELOG/2018_03_15/thread_1_seq_31.326.970854029
2018-03-15T17:44:43.939967+00:00
Media Recovery Log +FRA/STBYCDB/ARCHIVELOG/2018_03_15/thread_1_seq_32.327.970854029
(4):Flashback recovery: Added file #152 to control file as OFFLINE and 'UNNAMED00152'
(4):because it was dropped during the flashback interval
(4):or it was added during flashback media recovery.
(4):File was originally created as:
(4):'+DATA/PRODCDB/67632B96894E2116E0535500A8C05DA5/DATAFILE/test.282.970848211'
(4):File will have to be restored from a backup or
(4):recreated using ALTER DATABASE CREATE DATAFILE command,
(4):and the file has to be onlined and recovered.
(4):Recovery deleting file #152:'/opt/app/oracle/product/12.2.0/dbhome_1/dbs/UNNAMED00152' from controlfile.
(4):Recovery dropped tablespace 'TEST'
2018-03-15T17:44:44.498297+00:00
Incomplete Recovery applied until change 10050580 time 03/15/2018 16:04:22
2018-03-15T17:44:44.504472+00:00
Flashback Media Recovery Complete
2018-03-15T17:44:44.945998+00:00
Setting recovery target incarnation to 3
Completed: flashback database to scn 10050579
9. Start the redo apply and verify the DG broker status.
DGMGRL> edit database stbycdb set state='apply-on';
Succeeded.

DGMGRL> show configuration

Configuration - dg12c2

  Protection Mode: MaxAvailability
  Members:
  prodcdb - Primary database
    stbycdb - Physical standby database

Fast-Start Failover: DISABLED

Configuration Status:
SUCCESS   (status updated 5684 seconds ago)

DGMGRL> show database stbycdb

Database - stbycdb

  Role:               PHYSICAL STANDBY
  Intended State:     APPLY-ON
  Transport Lag:      0 seconds (computed 0 seconds ago)
  Apply Lag:          0 seconds (computed 0 seconds ago)
  Average Apply Rate: 33.00 KByte/s
  Real Time Query:    OFF
  Instance(s):
    stbycdb

Database Status:
SUCCESS

Primary CDB has PDB created with standbys=none
In 12.2, PDBs can be created only on the primary DB, without the datafiles of the PDB being created on the standby. On the standby, such a PDB will have its recovery status disabled.
SQL> select name,recovery_status from v$pdbs;

NAME       RECOVERY
---------- --------
PDB$SEED   ENABLED
PDBAPP1    ENABLED
PDBDEV     DISABLED

SQL> alter session set container=pdbdev;

SQL> select file#,name from v$datafile;

     FILE# NAME
---------- --------------------------------------------------------------------------------
       183 /opt/app/oracle/product/12.2.0/dbhome_1/dbs/UNNAMED00183
       184 /opt/app/oracle/product/12.2.0/dbhome_1/dbs/UNNAMED00184
       185 /opt/app/oracle/product/12.2.0/dbhome_1/dbs/UNNAMED00185
       186 /opt/app/oracle/product/12.2.0/dbhome_1/dbs/UNNAMED00186
The steps to flash back the CDB and then recover the standby CDB are the same as with a non-CDB.

1. Create a restore point, put the CDB into mount mode, and flash back the database, followed by open resetlogs.
SQL> create restore point first_restore guarantee flashback database;
Restore point created.


SQL> select scn,TIME,name,PDB_RESTORE_POINT from  v$restore_point;

       SCN TIME                           NAME                           PDB
---------- ------------------------------ ------------------------------ ---
  10537244 16-MAR-18 13.22.44.000000000   FIRST_RESTORE                  NO

SQL> startup force mount;

SQL> FLASHBACK DATABASE TO RESTORE POINT FIRST_RESTORE;
Flashback complete.

SQL> alter database open resetlogs;
Database altered.
2. Standby redo apply will stop due to diverging from the primary. The unnamed files mentioned are the files of the PDB with recovery disabled.
2018-03-16T14:59:57.788003+00:00
Deleted Oracle managed file +FRA/STBYCDB/ARCHIVELOG/2018_03_16/thread_0_seq_0.314.970930797
2018-03-16T14:59:58.485425+00:00
RFS[5]: Assigned to RFS process (PID:21851)
RFS[5]: New Archival REDO Branch: 970925153 Current: 970913416
Primary database is in MAXIMUM AVAILABILITY mode
Changing standby controlfile to RESYNCHRONIZATION level
Standby controlfile consistent with primary
RFS[5]: Selected log 9 for T-1.S-3 dbid 2963914998 branch 970925153
2018-03-16T14:59:59.108930+00:00
RFS[6]: Assigned to RFS process (PID:21854)
RFS[6]: Selected log 10 for T-1.S-2 dbid 2963914998 branch 970925153
A new recovery destination branch has been registered
RFS[6]: Standby in the future of new recovery destinationBranch(resetlogs_id) 970925153
Incomplete Recovery SCN: 10537681
Resetlogs SCN: 10537246

Standby Became Primary SCN: 10516843

Flashback database to SCN 10516843 to follow new branch
Flashback database to SCN 10516843 to follow new branch
...
2018-03-16T15:00:03.991683+00:00
Errors in file /opt/app/oracle/diag/rdbms/stbycdb/stbycdb/trace/stbycdb_m000_21894.trc:
ORA-01110: data file 183: '/opt/app/oracle/product/12.2.0/dbhome_1/dbs/UNNAMED00183'
ORA-01565: error in identifying file '/opt/app/oracle/product/12.2.0/dbhome_1/dbs/UNNAMED00183'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
2018-03-16T15:00:04.365763+00:00
Errors in file /opt/app/oracle/diag/rdbms/stbycdb/stbycdb/trace/stbycdb_m000_21894.trc:
ORA-01110: data file 184: '/opt/app/oracle/product/12.2.0/dbhome_1/dbs/UNNAMED00184'
ORA-01565: error in identifying file '/opt/app/oracle/product/12.2.0/dbhome_1/dbs/UNNAMED00184'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
2018-03-16T15:00:04.805232+00:00
Errors in file /opt/app/oracle/diag/rdbms/stbycdb/stbycdb/trace/stbycdb_m000_21894.trc:
ORA-01110: data file 185: '/opt/app/oracle/product/12.2.0/dbhome_1/dbs/UNNAMED00185'
ORA-01565: error in identifying file '/opt/app/oracle/product/12.2.0/dbhome_1/dbs/UNNAMED00185'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
2018-03-16T15:00:05.209166+00:00
Errors in file /opt/app/oracle/diag/rdbms/stbycdb/stbycdb/trace/stbycdb_m000_21894.trc:
ORA-01110: data file 186: '/opt/app/oracle/product/12.2.0/dbhome_1/dbs/UNNAMED00186'
ORA-01565: error in identifying file '/opt/app/oracle/product/12.2.0/dbhome_1/dbs/UNNAMED00186'
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 2: No such file or directory
Additional information: 7
Checker run found 8 new persistent data failures
...
MRP0: Background Media Recovery process shutdown (stbycdb)
3. Get the resetlogs_change# from the primary
SQL> select resetlogs_change# from v$database;

RESETLOGS_CHANGE#
-----------------
         10537246
and flash back the standby DB to (RESETLOGS_CHANGE# - 2), same as before.
SQL> flashback database to scn 10537244;
Flashback complete.
During the flashback, warnings will be issued for the offline data files; these can be ignored.
2018-03-16T15:03:49.487890+00:00
flashback database to scn 10537244
2018-03-16T15:03:49.840095+00:00
Flashback Restore Start
Flashback Restore Complete
Flashback Media Recovery Start
2018-03-16T15:03:50.286960+00:00
Setting recovery target incarnation to 4
2018-03-16T15:03:50.341298+00:00
 Started logmerger process
2018-03-16T15:03:51.035344+00:00
Parallel Media Recovery started with 4 slaves
2018-03-16T15:03:51.102859+00:00
Warning: Datafile 183 (+DATA/PRODCDB/6787CB93C0372211E0535500A8C052D4/DATAFILE/system.276.970921923) is offline during full database recovery and will not be recovered
Warning: Datafile 184 (+DATA/PRODCDB/6787CB93C0372211E0535500A8C052D4/DATAFILE/sysaux.275.970921923) is offline during full database recovery and will not be recovered
Warning: Datafile 185 (+DATA/PRODCDB/6787CB93C0372211E0535500A8C052D4/DATAFILE/undotbs1.277.970921923) is offline during full database recovery and will not be recovered
2018-03-16T15:03:51.270043+00:00
Media Recovery Log +FRA/STBYCDB/ARCHIVELOG/2018_03_16/thread_1_seq_45.317.970930803
2018-03-16T15:03:51.510617+00:00
Media Recovery Log +FRA/STBYCDB/ARCHIVELOG/2018_03_16/thread_1_seq_46.330.970930807
2018-03-16T15:03:51.688125+00:00
Media Recovery Log +FRA/STBYCDB/ARCHIVELOG/2018_03_16/thread_1_seq_47.321.970930807
2018-03-16T15:03:51.840959+00:00
Media Recovery Log +FRA/STBYCDB/ARCHIVELOG/2018_03_16/thread_1_seq_48.287.970930807
2018-03-16T15:03:51.987769+00:00
Incomplete Recovery applied until change 10537245 time 03/16/2018 13:22:45
2018-03-16T15:03:51.995117+00:00
Flashback Media Recovery Complete
2018-03-16T15:03:52.385714+00:00
Setting recovery target incarnation to 5
2018-03-16T15:03:52.565357+00:00
Completed: flashback database to scn 10537244
4. Finally enable redo apply.
DGMGRL> edit database stbycdb set state='apply-on';
Succeeded.
In the standby alert log:
2018-03-16T15:04:40.487316+00:00
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT  NODELAY
2018-03-16T15:04:40.495896+00:00
Attempt to start background Managed Standby Recovery process (stbycdb)
Starting background process MRP0
2018-03-16T15:04:40.548175+00:00
MRP0 started with pid=75, OS id=22252
2018-03-16T15:04:40.553249+00:00
MRP0: Background Managed Standby Recovery process started (stbycdb)
2018-03-16T15:04:45.608480+00:00
 Started logmerger process
2018-03-16T15:04:45.819166+00:00
Managed Standby Recovery starting Real Time Apply
2018-03-16T15:04:46.797164+00:00
Parallel Media Recovery started with 4 slaves
2018-03-16T15:04:46.866458+00:00
Warning: Datafile 183 (+DATA/PRODCDB/6787CB93C0372211E0535500A8C052D4/DATAFILE/system.276.970921923) is offline during full database recovery and will not be recovered
Warning: Datafile 184 (+DATA/PRODCDB/6787CB93C0372211E0535500A8C052D4/DATAFILE/sysaux.275.970921923) is offline during full database recovery and will not be recovered
Warning: Datafile 185 (+DATA/PRODCDB/6787CB93C0372211E0535500A8C052D4/DATAFILE/undotbs1.277.970921923) is offline during full database recovery and will not be recovered
2018-03-16T15:04:46.956819+00:00
Media Recovery Log +FRA/STBYCDB/ARCHIVELOG/2018_03_16/thread_1_seq_1.278.970930809
2018-03-16T15:04:47.118435+00:00
Media Recovery Log +FRA/STBYCDB/ARCHIVELOG/2018_03_16/thread_1_seq_2.314.970930803
2018-03-16T15:04:47.560091+00:00
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT  NODELAY
2018-03-16T15:04:48.937435+00:00
Media Recovery Log +FRA/STBYCDB/ARCHIVELOG/2018_03_16/thread_1_seq_3.331.970930811
2018-03-16T15:04:52.905396+00:00
Media Recovery Waiting for thread 1 sequence 4 (in transit)
2018-03-16T15:04:52.920078+00:00
Recovery of Online Redo Log: Thread 1 Group 10 Seq 4 Reading mem 0
  Mem# 0: +DATA/STBYCDB/ONLINELOG/group_10.283.970922707
  Mem# 1: +FRA/STBYCDB/ONLINELOG/group_10.324.970922711

Issues noticed
This test (CDB flashback when a PDB has recovery disabled) was done a few times. On one occasion the recovery start failed on the standby after the flashback. The issue was an ORA-600 error.
2018-03-15T16:34:18.223019+00:00
Managed Standby Recovery starting Real Time Apply
2018-03-15T16:34:19.670057+00:00
Errors in file /opt/app/oracle/diag/rdbms/stbycdb/stbycdb/trace/stbycdb_pr00_28192.trc  (incident=34209) (PDBNAME=CDB$ROOT):
ORA-00600: internal error code, arguments: [kcvaor_pdb_3], [148], [0], [0], [1], [0], [], [], [], [], [], []
Incident details in: /opt/app/oracle/diag/rdbms/stbycdb/stbycdb/incident/incdir_34209/stbycdb_pr00_28192_i34209.trc
2018-03-15T16:34:28.897393+00:00
Use ADRCI or Support Workbench to package the incident.
See Note 411.1 at My Oracle Support for error and packaging details.
2018-03-15T16:34:28.899991+00:00
MRP0: Background Media Recovery terminated with error 600
2018-03-15T16:34:28.901477+00:00
Errors in file /opt/app/oracle/diag/rdbms/stbycdb/stbycdb/trace/stbycdb_pr00_28192.trc:
ORA-00600: internal error code, arguments: [kcvaor_pdb_3], [148], [0], [0], [1], [0], [], [], [], [], [], []
Managed Standby Recovery not using Real Time Apply
2018-03-15T16:34:28.952661+00:00
Completed: ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT  NODELAY
2018-03-15T16:34:29.181019+00:00
Errors in file /opt/app/oracle/diag/rdbms/stbycdb/stbycdb/trace/stbycdb_mrp0_28186.trc  (incident=34201) (PDBNAME=CDB$ROOT):
ORA-00600: internal error code, arguments: [kcvaor_pdb_3], [148], [0], [0], [1], [0], [], [], [], [], [], []
Incident details in: /opt/app/oracle/diag/rdbms/stbycdb/stbycdb/incident/incdir_34201/stbycdb_mrp0_28186_i34201.trc
...
Errors in file /opt/app/oracle/diag/rdbms/stbycdb/stbycdb/trace/stbycdb_mrp0_28186.trc:
ORA-00600: internal error code, arguments: [kcvaor_pdb_3], [148], [0], [0], [1], [0], [], [], [], [], [], []
2018-03-15T16:34:32.201983+00:00
MRP0: Background Media Recovery process shutdown (stbycdb)
The trace file suggests the issue is related to incarnations, but none of the usual remedies worked.
cat stbycdb_pr00_28192.trc
Media Recovery apply resetlogs offline range for datafile 1, incarnation : 0
Media Recovery apply resetlogs offline range for datafile 3, incarnation : 0
Media Recovery apply resetlogs offline range for datafile 5, incarnation : 0
Media Recovery apply resetlogs offline range for datafile 7, incarnation : 0
Media Recovery apply resetlogs offline range for datafile 121, incarnation : 0
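As a side note, the incarnation state on the standby can be inspected with checks such as the following (a general sketch, not remedies specific to this ORA-600):

RMAN> list incarnation of database;

SQL> select incarnation#, resetlogs_change#, status from v$database_incarnation;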
ORA-600 errors are Oracle internal errors, and the ORA-600 error lookup tool (153788.1) didn't have any information on this one.
On another occasion the flashback on the standby failed due to datafiles belonging to an orphan incarnation.
RMAN> flashback database to scn 10545305;

Starting flashback at 16-MAR-18
using channel ORA_DISK_1
starting media recovery
media recovery failed
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of flashback command at 03/16/2018 15:49:16
ORA-00283: recovery session canceled due to errors
ORA-38770: FLASHBACK DATABASE failed during recovery.
ORA-19909: datafile 148 belongs to an orphan incarnation
ORA-01110: data file 148: '+DATA/STBYCDB/67632B96894E2116E0535500A8C05DA5/DATAFILE/system.282.970920351'
Refer to MOS 1509932.1 for resolving such cases.