Sequential Reads – Full Table Scans

Posting my observations on unusual situations where we observe sequential reads.

Sequential reads (the “db file sequential read” wait event, i.e. single-block reads) are generally expected when performing index scans and table/segment header reads.

Below are instances where I noticed sequential reads in operations they are usually not associated with.

First case:

  1. We were asked to check a slow-running SQL. I noticed significant time being spent on I/O, predominantly sequential reads, and the access path generating the reads (per SQL Monitor) was a full table scan. We usually associate scattered reads with full table scan access paths.
  2. On further checking the table structure, we could see it had more than 255 columns.
  3. Tables with more than 255 columns are row chained: each row is stored as multiple row pieces, chained intra-block (and inter-block when the row doesn’t fit in one block), and following the chain during the scan is done with single-block, i.e. sequential, reads.
  4. You can verify the statistics for the SQL/session (use Tanel Poder’s Snapper) and you will notice the “table fetch continued row” statistic climbing.
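Both points can be checked quickly with queries like the sketch below; the owner/table names and the :sid bind are placeholders for your case:

```sql
-- 1. Does the table have more than 255 columns? (APP/MY_TABLE are placeholders)
select count(*) as column_cnt
from   dba_tab_columns
where  owner = 'APP'
and    table_name = 'MY_TABLE';

-- 2. Watch the chained-row fetch statistic for the session running the scan
select n.name, s.value
from   v$sesstat  s
join   v$statname n on n.statistic# = s.statistic#
where  s.sid  = :sid
and    n.name = 'table fetch continued row';
```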

Second case:

  1. A similar issue with a slow-running SQL; I noticed sequential reads again for a full scan operation.
  2. Note that both sequential and scattered reads were present, but the significant majority of read operations were sequential.
  3. Got the session details and the waits for the operation from Real-Time SQL Monitoring.
  4. Statistics for the session showed major activity under “rollback changes – undo records applied”.
  5. This statistic relates to providing read-consistent versions of the blocks.
  6. After the long-running read operation started, DML changes happened on the table in question. Since the read operation must see data as of the time it was triggered, it applies undo (single-block reads, i.e. sequential reads) to build a consistent image of the blocks.
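The undo application can be observed from the session statistics while the query runs. A sketch; :sid is a placeholder for the session of the long-running SQL, and the statistic names in the IN list are the usual consistent-read counters:

```sql
-- Consistent-read activity for the session: undo applied to rebuild block images
select n.name, s.value
from   v$sesstat  s
join   v$statname n on n.statistic# = s.statistic#
where  s.sid = :sid
and    n.name in ('data blocks consistent reads - undo records applied',
                  'CR blocks created',
                  'consistent gets');
```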

Third case:

  1. Full scan operations involving tables with CLOB columns will also show predominantly sequential reads.
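To confirm which LOB columns a table has, and whether the LOB segments are cached (which influences the waits you will see), a query like this can be used; the owner and table names are placeholders:

```sql
-- LOB columns on the table and their CACHE setting
select column_name, segment_name, cache
from   dba_lobs
where  owner = 'APP'
and    table_name = 'MY_TABLE';
```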

 

 

Select Query Failing with ORA-07445

Issue:

Oracle DB: 12.1.0.2

While running a select query, it failed with the exception below.

ORA-10173: Dynamic Sampling time-out error

ORA-07445: exception encountered: core dump [qcdlgcd()+70] [SIGSEGV] [ADDR:0x4] [PC:0xCDFB0E6] [Address not mapped to object] []

Troubleshooting:

Seeing the ORA-10173, I initially thought the issue was with dynamic sampling triggered by a SQL plan directive. I gathered stats on the table, which didn’t help.

Later I disabled dynamic sampling at the session level to verify; the query failed again with ORA-07445.

Resolution:

Later I came across Oracle MOS note 2028777.1, which suggests setting the hidden parameter below to false.

alter session set "_index_join_enabled"=false;

After enabling a 10053 trace and checking the _index_join_enabled parameter, my symptoms matched those described in the note.

References:

ORA-07445: core dump [qcdlgcd()+70] for Union All Statement : No More Data To Read From Socket (Doc ID 2028777.1)

ORA-10173: Dynamic Sampling time-out error in alert.log (Doc ID 1491818.1)

Duplicate Fails with ORA-17627: ORA-01041

Issue:

While performing an active duplicate of a 5TB database from non-ASM to ASM, the duplication failed after restoring 2TB with the errors below.

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of Duplicate Db command at 12/06/2017 07:23:21
RMAN-05501: aborting duplication of target database
RMAN-03015: error occurred in stored script Memory Script

ORA-17627: ORA-01041: internal error. hostdef extension doesn’t exist
ORA-19849: error while reading backup piece from service perfsp3db
ORA-03113: end-of-file on communication channel
ORA-19558: error de-allocating device
ORA-19557: device error, device type: DISK, device name:
ORA-17627: ORA-01041: internal error. hostdef extension doesn’t exist

Solution:

As per MOS note 2230068.1, it is an unpublished bug. Setting DISABLE_OOB=ON in sqlnet.ora fixed the issue.
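For reference, the sqlnet.ora entry is a single line; confirm against the MOS note whether it belongs on the target side, the auxiliary side, or both for your version:

```
# sqlnet.ora
DISABLE_OOB=ON
```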

PS: RMAN restore optimization does help if you re-run the duplicate (it skips already-restored datafiles), but with ASM/OMF-managed files the duplicate restores the datafiles again, so I had to remove the restored datafiles manually before re-triggering the duplication.

Dropped Tablespace Recovery – Flashback Option

I wanted to test a scenario where a lot of blog posts suggest recovering a dropped tablespace using Flashback Database. It only works if we have all the archivelogs/flashback logs from the tablespace creation time; otherwise we can’t get the dropped tablespace back.

Recovering the tablespace when archivelogs/flashback logs exist from the tablespace creation time:

SQL> create tablespace test datafile '+DATA' size 100M;

Tablespace created.

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
 5711390

SQL> drop tablespace test including contents and datafiles;

Tablespace dropped.

SQL> startup mount
ORACLE instance started.

Total System Global Area 1191182336 bytes
Fixed Size 2923728 bytes
Variable Size 536871728 bytes
Database Buffers 637534208 bytes
Redo Buffers 13852672 bytes
Database mounted.

SQL> flashback database to scn 5711390;

Flashback complete.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DATA/TEST/DATAFILE/system.278.955964033
+DATA/TEST/DATAFILE/sysaux.277.955963991
+DATA/TEST/DATAFILE/undotbs1.280.955964079
+DATA/TEST/FD9AC20F64D244D7E043B6A9E80A2F2F/DATAFILE/system.286.955964133
+DATA/TEST/DATAFILE/users.279.955964079
+DATA/TEST/FD9AC20F64D244D7E043B6A9E80A2F2F/DATAFILE/sysaux.285.955964133
+DATA/TEST/DATAFILE/undotbs2.288.955964403
+DATA/TEST/5A51DA5618407A1CE053D6A4580A696A/DATAFILE/system.292.955964601
+DATA/TEST/5A51DA5618407A1CE053D6A4580A696A/DATAFILE/sysaux.293.955964601
+DATA/TEST/5A51DA5618407A1CE053D6A4580A696A/DATAFILE/users.295.955964615
/scratch/oracle/app/oracle/product/12.1.0/dbhome_1/dbs/UNNAMED00012

11 rows selected.

SQL> alter database create datafile '/scratch/oracle/app/oracle/product/12.1.0/dbhome_1/dbs/UNNAMED00012' as '+DATA';

Database altered.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DATA/TEST/DATAFILE/system.278.955964033
+DATA/TEST/DATAFILE/sysaux.277.955963991
+DATA/TEST/DATAFILE/undotbs1.280.955964079
+DATA/TEST/FD9AC20F64D244D7E043B6A9E80A2F2F/DATAFILE/system.286.955964133
+DATA/TEST/DATAFILE/users.279.955964079
+DATA/TEST/FD9AC20F64D244D7E043B6A9E80A2F2F/DATAFILE/sysaux.285.955964133
+DATA/TEST/DATAFILE/undotbs2.288.955964403
+DATA/TEST/5A51DA5618407A1CE053D6A4580A696A/DATAFILE/system.292.955964601
+DATA/TEST/5A51DA5618407A1CE053D6A4580A696A/DATAFILE/sysaux.293.955964601
+DATA/TEST/5A51DA5618407A1CE053D6A4580A696A/DATAFILE/users.295.955964615
+DATA/TEST/DATAFILE/test.307.958560911

11 rows selected.

SQL> alter database open resetlogs;

Database altered.

Testing flashback without archivelogs from the tablespace creation time:

SQL> select current_scn from v$database;

CURRENT_SCN
-----------
 5714396

SQL> drop tablespace users including contents and datafiles;

Tablespace dropped.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DATA/TEST/DATAFILE/system.278.955964033
+DATA/TEST/DATAFILE/sysaux.277.955963991
+DATA/TEST/DATAFILE/undotbs1.280.955964079
+DATA/TEST/FD9AC20F64D244D7E043B6A9E80A2F2F/DATAFILE/system.286.955964133
+DATA/TEST/FD9AC20F64D244D7E043B6A9E80A2F2F/DATAFILE/sysaux.285.955964133
+DATA/TEST/DATAFILE/undotbs2.288.955964403
+DATA/TEST/5A51DA5618407A1CE053D6A4580A696A/DATAFILE/system.292.955964601
+DATA/TEST/5A51DA5618407A1CE053D6A4580A696A/DATAFILE/sysaux.293.955964601
+DATA/TEST/5A51DA5618407A1CE053D6A4580A696A/DATAFILE/users.295.955964615
+DATA/TEST/DATAFILE/test.307.958560911

10 rows selected.

SQL> startup mount
ORACLE instance started.

Total System Global Area 1191182336 bytes
Fixed Size 2923728 bytes
Variable Size 536871728 bytes
Database Buffers 637534208 bytes
Redo Buffers 13852672 bytes
Database mounted.

SQL> flashback database to scn 5714396;

Flashback complete.

SQL> select name from v$datafile;

NAME
--------------------------------------------------------------------------------
+DATA/TEST/DATAFILE/system.278.955964033
+DATA/TEST/DATAFILE/sysaux.277.955963991
+DATA/TEST/DATAFILE/undotbs1.280.955964079
+DATA/TEST/FD9AC20F64D244D7E043B6A9E80A2F2F/DATAFILE/system.286.955964133
/scratch/oracle/app/oracle/product/12.1.0/dbhome_1/dbs/UNNAMED00006
+DATA/TEST/FD9AC20F64D244D7E043B6A9E80A2F2F/DATAFILE/sysaux.285.955964133
+DATA/TEST/DATAFILE/undotbs2.288.955964403
+DATA/TEST/5A51DA5618407A1CE053D6A4580A696A/DATAFILE/system.292.955964601
+DATA/TEST/5A51DA5618407A1CE053D6A4580A696A/DATAFILE/sysaux.293.955964601
+DATA/TEST/5A51DA5618407A1CE053D6A4580A696A/DATAFILE/users.295.955964615
+DATA/TEST/DATAFILE/test.307.958560911

11 rows selected.

SQL> alter database create datafile '/scratch/oracle/app/oracle/product/12.1.0/dbhome_1/dbs/UNNAMED00006' as '+DATA';
alter database create datafile '/scratch/oracle/app/oracle/product/12.1.0/dbhome_1/dbs/UNNAMED00006' as '+DATA'
*
ERROR at line 1:
ORA-01178: file 6 created before last CREATE CONTROLFILE, cannot recreate
ORA-01111: name for data file 6 is unknown - rename to correct file
ORA-01110: data file 6:
'/scratch/oracle/app/oracle/product/12.1.0/dbhome_1/dbs/UNNAMED00006'

Here Oracle has no information about when the datafile was created or what its name was, so it can’t be recovered using Flashback Database. We have to perform TSPITR (tablespace point-in-time recovery) to get the dropped tablespace back.
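An automated TSPITR for this case would look roughly like the sketch below; the auxiliary destination path is a placeholder, the SCN must predate the DROP, and a backup taken while the tablespace still existed is required:

```sql
-- RMAN, connected to the target database: recover the dropped tablespace
-- to just before the drop ('/u01/aux' is a placeholder staging area)
RECOVER TABLESPACE users
  UNTIL SCN 5714396
  AUXILIARY DESTINATION '/u01/aux';
```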

 

Alter Database Datafile Offline Drop And Alter Tablespace Offline

https://community.oracle.com/thread/4067842

A recent question on the Oracle community about a user dropping a datafile using OFFLINE DROP made me recall what happens during datafile offline, tablespace offline, and tablespace read-only modes.

To start with the concept of dropping: Oracle does not allow dropping a datafile from a tablespace (except from 10.2 onwards, where empty files can be dropped), since the extents of the segments in a tablespace can be spread across its datafiles.

If you want to exclude a datafile for any reason while in NOARCHIVELOG mode, you have to use the OFFLINE DROP clause. The datafile will continue to exist in the dictionary and the controlfile; the clause just marks in the controlfile that this datafile’s SCN need not be verified during database startup.

In ARCHIVELOG mode, Oracle ignores the DROP keyword; it behaves the same as taking the datafile offline.

Now, testing checkpoint_change# when a datafile and a tablespace are taken offline/read only.

SYS@ORCL>select name,CHECKPOINT_CHANGE#,LAST_CHANGE#,status from v$datafile;

NAME CHECKPOINT_CHANGE# LAST_CHANGE# STATUS
-------------------------------------------------- ------------------ ------------ -------
/u01/ORCL/system01.dbf 31312696 SYSTEM
/u01/ORCL/SYSAUX.dbf 31312696 ONLINE
/u01/ORCL/UNDOTBS1.dbf 31312696 ONLINE
/u01/ORCL/tools01.dbf 31312696 ONLINE
/u01/ORCL/ora01.dbf 31312696 ONLINE
/u01/ORCL/xmldbts01.dbf 31312696 ONLINE
/u01/ORCL/test1.dbf 31315480 ONLINE
/u01/ORCL/test2.dbf 31315508 ONLINE

8 rows selected.

SYS@ORCL>alter system checkpoint;

System altered.

SYS@ORCL>select name,CHECKPOINT_CHANGE#,LAST_CHANGE#,status from v$datafile;

NAME CHECKPOINT_CHANGE# LAST_CHANGE# STATUS
-------------------------------------------------- ------------------ ------------ -------
/u01/ORCL/system01.dbf 31316163 SYSTEM
/u01/ORCL/SYSAUX.dbf 31316163 ONLINE
/u01/ORCL/UNDOTBS1.dbf 31316163 ONLINE
/u01/ORCL/tools01.dbf 31316163 ONLINE
/u01/ORCL/ora01.dbf 31316163 ONLINE
/u01/ORCL/xmldbts01.dbf 31316163 ONLINE
/u01/ORCL/test1.dbf 31316163 ONLINE
/u01/ORCL/test2.dbf 31316163 ONLINE

8 rows selected.

Now all datafiles have same checkpoint SCN.

I will take one datafile of tablespace TEST offline. As the DB is in ARCHIVELOG mode, even OFFLINE DROP will behave the same.

SYS@ORCL>alter database datafile '/u01/ORCL/test2.dbf' offline drop;

Database altered.

SYS@ORCL>select name,CHECKPOINT_CHANGE#,LAST_CHANGE#,status from v$datafile;

NAME CHECKPOINT_CHANGE# LAST_CHANGE# STATUS
-------------------------------------------------- ------------------ ------------ -------
/u01/ORCL/system01.dbf 31316163 SYSTEM
/u01/ORCL/SYSAUX.dbf 31316163 ONLINE
/u01/ORCL/UNDOTBS1.dbf 31316163 ONLINE
/u01/ORCL/tools01.dbf 31316163 ONLINE
/u01/ORCL/ora01.dbf 31316163 ONLINE
/u01/ORCL/xmldbts01.dbf 31316163 ONLINE
/u01/ORCL/test1.dbf 31316163 ONLINE
/u01/ORCL/test2.dbf 31316163 31316201 RECOVER

8 rows selected.

v$datafile_header will show the status. FYI, v$datafile_header, as the name of the dynamic view suggests, reflects what is actually in the datafile header, while v$datafile reflects controlfile information.

SYS@ORCL>select name,CHECKPOINT_CHANGE#,STATUS,RECOVER from v$datafile_header;

NAME CHECKPOINT_CHANGE# STATUS REC
-------------------------------------------------- ------------------ ------- ---
/u01/ORCL/system01.dbf 31316163 ONLINE NO
/u01/ORCL/SYSAUX.dbf 31316163 ONLINE NO
/u01/ORCL/UNDOTBS1.dbf 31316163 ONLINE NO
/u01/ORCL/tools01.dbf 31316163 ONLINE NO
/u01/ORCL/ora01.dbf 31316163 ONLINE NO
/u01/ORCL/xmldbts01.dbf 31316163 ONLINE NO
/u01/ORCL/test1.dbf 31316163 ONLINE NO
/u01/ORCL/test2.dbf 31316163 OFFLINE YES

8 rows selected.

Taking a datafile offline does not perform any file/tablespace checkpoint.

v$datafile.last_change# is the stop SCN: the SCN up to which redo information is available for the datafile, as recorded in the controlfile.

In order to bring the datafile back online, we need to recover it. But until what point?
We only need the redo logs that contain the redo up to the offlined datafile’s stop SCN; we do not need any archivelogs after that.

In the above case we only need the redo log that contains the redo at SCN 31316201, which is in sequence# 642.

Even if we lose the later redo logs, we can still bring the datafile online.

SYS@ORCL>select sequence#, FIRST_CHANGE#,to_Char(NEXT_CHANGE#) from v$archived_log;

SEQUENCE# FIRST_CHANGE# TO_CHAR(NEXT_CHANGE#)
---------- ------------- ----------------------------------------
 641 31303644 31312696
 642 31312696 31316487
 643 31316487 31316490
 644 31316490 31316493
 645 31316493 31316496
 646 31316496 31316499
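The sequence covering a given stop SCN can also be located directly with a query along these lines:

```sql
-- Which archived log covers the stop SCN 31316201?
select sequence#, first_change#, next_change#
from   v$archived_log
where  31316201 between first_change# and next_change# - 1;
```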

I recover only until the last_change# SCN and then bring the datafile online.

RMAN> recover datafile 8 until scn 31316202;

Starting recover at 05-AUG-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=87 device type=DISK

starting media recovery

archived log for thread 1 with sequence 642 is already on disk as file 
/u01/ORCL/flash_recovery_area/ORCL/archivelog/2017_08_05/o1_mf_1_642_drcl59k8_.arc
archived log file name=/u01/ORCL/flash_recovery_area/ORCL/archivelog/2017_08_05/o1_mf_1_642_drcl59k8_.arc thread=1 sequence=642
media recovery complete, elapsed time: 00:00:00
Finished recover at 05-AUG-17

SYS@ORCL>alter database datafile '/u01/ORCL/test2.dbf' online;

Database altered.

last_change# will be reset to NULL once the datafile is back online.

SYS@ORCL>select name,CHECKPOINT_CHANGE#,LAST_CHANGE#,status from v$datafile;

NAME CHECKPOINT_CHANGE# LAST_CHANGE# STATUS
-------------------------------------------------- ------------------ ------------ -------
/u01/ORCL/system01.dbf 31316499 SYSTEM
/u01/ORCL/SYSAUX.dbf 31316499 ONLINE
/u01/ORCL/UNDOTBS1.dbf 31316499 ONLINE
/u01/ORCL/tools01.dbf 31316499 ONLINE
/u01/ORCL/ora01.dbf 31316499 ONLINE
/u01/ORCL/xmldbts01.dbf 31316499 ONLINE
/u01/ORCL/test1.dbf 31316499 ONLINE
/u01/ORCL/test2.dbf 31316646 ONLINE

8 rows selected.

SYS@ORCL>select name,CHECKPOINT_CHANGE#,STATUS,RECOVER from v$datafile_header;

NAME CHECKPOINT_CHANGE# STATUS REC
-------------------------------------------------- ------------------ ------- ---
/u01/ORCL/system01.dbf 31316499 ONLINE NO
/u01/ORCL/SYSAUX.dbf 31316499 ONLINE NO
/u01/ORCL/UNDOTBS1.dbf 31316499 ONLINE NO
/u01/ORCL/tools01.dbf 31316499 ONLINE NO
/u01/ORCL/ora01.dbf 31316499 ONLINE NO
/u01/ORCL/xmldbts01.dbf 31316499 ONLINE NO
/u01/ORCL/test1.dbf 31316499 ONLINE NO
/u01/ORCL/test2.dbf 31316646 ONLINE NO

8 rows selected.

Now I will take the entire tablespace offline. Tablespace offline performs a checkpoint on the respective datafiles (compare last_change# and checkpoint_change#), so no recovery is needed when you bring the tablespace back online.

SYS@ORCL>alter tablespace test offline;

Tablespace altered.

SYS@ORCL>select name,CHECKPOINT_CHANGE#,LAST_CHANGE#,status from v$datafile;

NAME CHECKPOINT_CHANGE# LAST_CHANGE# STATUS
-------------------------------------------------- ------------------ ------------ -------
/u01/ORCL/system01.dbf 31316499 SYSTEM
/u01/ORCL/SYSAUX.dbf 31316499 ONLINE
/u01/ORCL/UNDOTBS1.dbf 31316499 ONLINE
/u01/ORCL/tools01.dbf 31316499 ONLINE
/u01/ORCL/ora01.dbf 31316499 ONLINE
/u01/ORCL/xmldbts01.dbf 31316499 ONLINE
/u01/ORCL/test1.dbf 31316705 31316705 OFFLINE
/u01/ORCL/test2.dbf 31316705 31316705 OFFLINE

8 rows selected.

SYS@ORCL>alter tablespace test online;

Tablespace altered.

The same is the case when the tablespace is put in read-only mode.

SYS@ORCL>alter tablespace test read only;

Tablespace altered.

SYS@ORCL>select name,CHECKPOINT_CHANGE#,LAST_CHANGE#,status from v$datafile;

NAME CHECKPOINT_CHANGE# LAST_CHANGE# STATUS
-------------------------------------------------- ------------------ ------------ -------
/u01/ORCL/system01.dbf 31316499 SYSTEM
/u01/ORCL/SYSAUX.dbf 31316499 ONLINE
/u01/ORCL/UNDOTBS1.dbf 31316499 ONLINE
/u01/ORCL/tools01.dbf 31316499 ONLINE
/u01/ORCL/ora01.dbf 31316499 ONLINE
/u01/ORCL/xmldbts01.dbf 31316499 ONLINE
/u01/ORCL/test1.dbf 31316936 31316936 ONLINE
/u01/ORCL/test2.dbf 31316936 31316936 ONLINE

8 rows selected.

SYS@ORCL>alter tablespace test read write;

Tablespace altered.

To summarize:

  1. Datafile OFFLINE DROP will not remove the datafile from the dictionary/controlfile; it only marks in the controlfile that SCN comparison isn’t needed when starting the DB.
  2. Datafile OFFLINE does not perform a checkpoint, so the file needs recovery to be brought back online. We only need the sequence# that contains the redo up to last_change# to bring the datafile online.
  3. Tablespace OFFLINE or READ ONLY performs a checkpoint, so no recovery is needed to bring the tablespace back online/read-write.

Null is Not Null – Filter

While checking something, I came across the old OTN question below, so I wanted to verify the optimizer behaviour.

https://community.oracle.com/thread/2406366

TEST@XCONTEST>create table test as select RPAD(TO_CHAR(ROWNUM),3,'X') a,rownum b from dual CONNECT BY LEVEL<=1000;

Table created.

TEST@XCONTEST>create index test_idx on test (a,b);

Index created.

TEST@XCONTEST>EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'TEST',CASCADE=>TRUE,ESTIMATE_PERCENT=>100);

PL/SQL procedure successfully completed.

Check for rows with a='':

TEST@XCONTEST>select * from test where a='';

no rows selected

TEST@XCONTEST>SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));

PLAN_TABLE_OUTPUT
 ----------------------------------------------------------------------------------------------------------------------------------
 SQL_ID 1f8m76xwz6mpv, child number 0
 -------------------------------------
 select * from test where a=''

Plan hash value: 766971379

---------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 ---------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | | | 1 (100)| |
 |* 1 | FILTER | | | | | |
 | 2 | TABLE ACCESS FULL| TEST | 1000 | 8000 | 2 (0)| 00:00:01 |
 ---------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

1 - filter(NULL IS NOT NULL)

The optimizer adds the NULL IS NOT NULL filter when it determines the sub-plan tree can never return rows; the sub-plan tree is never executed at runtime, which is also why the parent FILTER’s cost does not accumulate the child’s cost.

Why wasn’t the index picked, and why does the plan contain this specific filter?

A zero-length varchar2 is treated as NULL (thanks to Tom Kyte), so in the above query we are searching for NULL, and nothing equates to NULL; the child operation at id 2 never needs to be executed at all. The index wasn’t used here because column a isn’t defined as NOT NULL.
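The zero-length-string behaviour is easy to verify directly:

```sql
-- In Oracle, '' is NULL: this returns one row
select 1 from dual where '' is null;

-- ...and nothing equates to NULL: this returns no rows
select 1 from dual where '' = '';
```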

TEST@XCONTEST>alter table test modify a not null;

Table altered.

TEST@XCONTEST>select * from test where a='';

no rows selected

TEST@XCONTEST>SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));

PLAN_TABLE_OUTPUT
 ----------------------------------------------------------------------------------------------------
 SQL_ID bzgnr9bg1z2ps, child number 0
 -------------------------------------
 select * from test where a=''

Plan hash value: 1731493281

-----------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 -----------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | | | 1 (100)| |
 |* 1 | FILTER | | | | | |
 | 2 | INDEX FULL SCAN| TEST_IDX | 1000 | 8000 | 1 (0)| 00:00:01 |
 -----------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

1 - filter(NULL IS NOT NULL)
 19 rows selected.

DBMS_METADATA.GET_DDL and Interval Partitions

I am writing this post after an OTN question about DBMS_METADATA not showing the DDL for interval partitions after they are created, and Jonathan Lewis’s explanation of the reason for this behavior.

create table test3
 (
 id number,
 created_dt date
 )
 partition by range (created_dt)
 interval (numtoyminterval(1, 'month'))
 (
 partition p_feb2017 values less than (to_date('2017-03-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')) ,
 partition p_mar2017 values less than (to_date('2017-04-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian'))
 ) ;
 SQL> select partition_name,partition_position,high_value from user_tab_partitions where table_name ='TEST3' order by partition_position;

PARTITION_NAME PARTITION_POSITION HIGH_VALUE
 -------------------- ------------------ --------------------------------------------------------------------------------
 P_FEB2017 1 TO_DATE(' 2017-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
 P_MAR2017 2 TO_DATE(' 2017-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

Check the DBMS_METADATA output for the table.

SQL> select dbms_metadata.get_ddl('TABLE','TEST3','RAKESH') from dual;
 DBMS_METADATA.GET_DDL('TABLE','TEST3','RAKESH')
 --------------------------------------------------------------------------

CREATE TABLE "RAKESH"."TEST3"
 ( "ID" NUMBER,
 "CREATED_DT" DATE
 ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 STORAGE(
 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
 TABLESPACE "USERS"
 PARTITION BY RANGE ("CREATED_DT") INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
  (PARTITION "P_FEB2017" VALUES LESS THAN (TO_DATE(' 2017-03-01 00:00:00', 'SYYY
 Y-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) SEGMENT CREATION DEFERRED
 PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
 STORAGE(
 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
 TABLESPACE "USERS" ,
 PARTITION "P_MAR2017" VALUES LESS THAN (TO_DATE(' 2017-04-01 00:00:00', 'SYYYY
 -MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) SEGMENT CREATION DEFERRED
 PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
 STORAGE(
 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
 TABLESPACE "USERS" )

Insert new data into the table.

SQL> insert into test3 values (1,to_date('8-MAY-2017', 'DD-MON-YYYY'));

1 row created.

SQL> commit;

Commit complete.

Interval partition created.

SQL> select partition_name,partition_position,high_value from user_tab_partitions where table_name ='TEST3' order by partition_position;

PARTITION_NAME PARTITION_POSITION HIGH_VALUE
 -------------------- ------------------ --------------------------------------------------------------------------------
 P_FEB2017 1 TO_DATE(' 2017-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
 N')

P_MAR2017 2 TO_DATE(' 2017-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
 N')

SYS_P911 3 TO_DATE(' 2017-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
  N')

But the DDL from DBMS_METADATA still shows only the partitions that existed when the table was first created; it doesn’t include the new partition created for May.

CREATE TABLE "RAKESH"."TEST3"
 ( "ID" NUMBER,
 "CREATED_DT" DATE
 ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 STORAGE(
 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
 TABLESPACE "USERS"
 PARTITION BY RANGE ("CREATED_DT") INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
 (PARTITION "P_FEB2017" VALUES LESS THAN (TO_DATE(' 2017-03-01 00:00:00', 'SYYY
 Y-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) SEGMENT CREATION DEFERRED
 PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
 STORAGE(
 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
 TABLESPACE "USERS" ,
 PARTITION "P_MAR2017" VALUES LESS THAN (TO_DATE(' 2017-04-01 00:00:00', 'SYYYY
 -MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) SEGMENT CREATION DEFERRED
 PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
 STORAGE(
 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
 TABLESPACE "USERS" )

Thanks to Jonathan Lewis, who put a comment explaining the rationale behind DBMS_METADATA not showing the interval partitions. I am quoting his words:

“There is a simple rationale for this behaviour.

Imagine the only out-of-range data you’ve added is for December – Oracle will add the december partition but will not add the partitions for april through to november.
If dbms_metadata.get_ddl now created the DDL to create partitions with the three boundaries that existed the code would create three RANGE partitions, which means the december partition would cover april to december and act as the lower bound for the interval partitions, and you wouldn’t get (e.g.) a dedicated june partition created if you then inserted some data for June”

Oracle 12c Internal Users

DBA_USERS in Oracle 12c includes a new column, ORACLE_MAINTAINED, which shows whether or not the user is Oracle-maintained.

SQL> desc dba_users
Name Null? Type
----------------------------------------------- -------- ----------------------------
USERNAME NOT NULL VARCHAR2(128)
USER_ID NOT NULL NUMBER
PASSWORD VARCHAR2(4000)
ACCOUNT_STATUS NOT NULL VARCHAR2(32)
LOCK_DATE DATE
EXPIRY_DATE DATE
DEFAULT_TABLESPACE NOT NULL VARCHAR2(30)
TEMPORARY_TABLESPACE NOT NULL VARCHAR2(30)
CREATED NOT NULL DATE
PROFILE NOT NULL VARCHAR2(128)
INITIAL_RSRC_CONSUMER_GROUP VARCHAR2(128)
EXTERNAL_NAME VARCHAR2(4000)
PASSWORD_VERSIONS VARCHAR2(12)
EDITIONS_ENABLED VARCHAR2(1)
AUTHENTICATION_TYPE VARCHAR2(8)
PROXY_ONLY_CONNECT VARCHAR2(1)
COMMON VARCHAR2(3)
LAST_LOGIN TIMESTAMP(9) WITH TIME ZONE
ORACLE_MAINTAINED VARCHAR2(1)

SQL> select username,oracle_maintained,account_status from dba_users where oracle_maintained='Y';

USERNAME O ACCOUNT_STATUS
-------------------- - --------------------------------
GSMCATUSER Y EXPIRED & LOCKED
SYSBACKUP Y EXPIRED & LOCKED
OUTLN Y OPEN
SYSDG Y EXPIRED & LOCKED
DBSNMP Y OPEN
ORACLE_OCM Y EXPIRED & LOCKED
SYSKM Y EXPIRED & LOCKED
XS$NULL Y EXPIRED & LOCKED
GSMUSER Y EXPIRED & LOCKED
AUDSYS Y EXPIRED & LOCKED
SYS Y OPEN
SYSTEM Y OPEN
DIP Y EXPIRED & LOCKED
ANONYMOUS Y LOCKED
XDB Y EXPIRED & LOCKED
APPQOSSYS Y EXPIRED & LOCKED
GSMADMIN_INTERNAL Y EXPIRED & LOCKED

17 rows selected.

Convert Admin managed to Policy Managed Database in RAC

I haven’t come across a policy-managed database in RAC till now, so I am testing the conversion from admin-managed to policy-managed and vice versa.

Convert Admin Managed to Policy Managed Database:

Current configuration status of the database:

$ srvctl config database -d TESTCDB
Database unique name: TESTCDB
Database name: TESTCDB
Oracle home: /u01/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile: +DATA_GRP_1/TESTCDB/PARAMETERFILE/spfile.315.928144379
Password file: +DATA_GRP_1/TESTCDB/PASSWORD/pwdtestcdb.300.928143681
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: REDO_GRP_2,DATA_GRP_1
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: dba
Database instances: TESTCDB1,TESTCDB2
Configured nodes: VM1,VM2
Database is administrator managed

Server pool List:

$ srvctl status srvpool
Server pool name: Free
Active servers count: 0
Server pool name: Generic
Active servers count: 2

Add new Server pool TEST

$ srvctl add srvpool -serverpool TEST -min 0 -max 2
$ srvctl status srvpool
Server pool name: Free
Active servers count: 0
Server pool name: Generic
Active servers count: 2
Server pool name: TEST
Active servers count: 0

Modify TESTCDB to use server pool TEST.

$ srvctl modify database -d TESTCDB -g TEST -force

$ srvctl config database -d TESTCDB
Database unique name: TESTCDB
Database name: TESTCDB
Oracle home: /u01/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile: +DATA_GRP_1/TESTCDB/PARAMETERFILE/spfile.315.928144379
Password file: +DATA_GRP_1/TESTCDB/PASSWORD/pwdtestcdb.300.928143681
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: TEST
Disk Groups: REDO_GRP_2,DATA_GRP_1
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: dba
Database instances:
Configured nodes:
Database is policy managed

Convert Policy Managed Database to Admin Managed Database:

To convert from policy-managed to admin-managed, you have to remove the database from the cluster registry. To remove it gracefully, stop the database first and then remove it. As this is a test instance, I am using the -force option.

$ srvctl remove database -d TESTCDB
PRKO-3141 : Database TESTCDB could not be removed because it was running
$ srvctl remove database -d TESTCDB -force

$ srvctl status database -d TESTCDB
PRCD-1120 : The resource for database TESTCDB could not be found.
PRCR-1001 : Resource ora.testcdb.db does not exist

Add the database and instances to the cluster again.

$ srvctl add database -d TESTCDB -o /u01/app/oracle/product/12.1.0.2/db_1 -y AUTOMATIC
$ srvctl config database -d TESTCDB
Database unique name: TESTCDB
Database name:
Oracle home: /u01/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile:
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups:
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: dba
Database instances:
Configured nodes:
Database is administrator managed

Add the instances, spfile, and password file to the registry.

$ srvctl add instance -d TESTCDB -i TESTCDB1 -n VM1
$ srvctl add instance -d TESTCDB -i TESTCDB2 -n VM2
$ srvctl modify database -d TESTCDB -p +DATA_GRP_1/TESTCDB/PARAMETERFILE/spfile.315.928144379
$ srvctl modify database -d TESTCDB -pwfile +DATA_GRP_1/TESTCDB/PASSWORD/pwdtestcdb.300.928143681
$ srvctl config database -d TESTCDB
Database unique name: TESTCDB
Database name:
Oracle home: /u01/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile: +DATA_GRP_1/TESTCDB/PARAMETERFILE/spfile.315.928144379
Password file: +DATA_GRP_1/TESTCDB/PASSWORD/pwdtestcdb.300.928143681
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups:
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: dba
Database instances: TESTCDB1,TESTCDB2
Configured nodes: VM1,VM2
Database is administrator managed

Ref: https://docs.oracle.com/database/121/RACAD/admcon.htm#RACAD7282