Alter Database Datafile Offline Drop And Alter Tablespace Offline

https://community.oracle.com/thread/4067842

A recent question on the Oracle Community forum about a user dropping a datafile with OFFLINE DROP prompted me to revisit what happens during datafile offline, tablespace offline, and tablespace read only operations.

To start with the concept of dropping: Oracle does not allow dropping an individual datafile from a tablespace, because the extents of the segments in that tablespace can be spread across its datafiles, so a straight datafile drop is not permitted.

If you want to exclude a datafile for any reason, in NOARCHIVELOG mode you have to use the OFFLINE DROP clause. The datafile continues to exist in the dictionary and the controlfile; the clause merely marks in the controlfile that this datafile does not need SCN verification during database startup.

In ARCHIVELOG mode, Oracle ignores the DROP keyword and the statement behaves the same as simply taking the datafile offline.
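As a quick reference, a minimal sketch of the two forms (the file name is just the one used in the test below); you can confirm the log mode first:

select log_mode from v$database;

-- NOARCHIVELOG mode: the DROP keyword is mandatory
alter database datafile '/u01/ORCL/test2.dbf' offline drop;

-- ARCHIVELOG mode: DROP is accepted but the file is simply taken offline
alter database datafile '/u01/ORCL/test2.dbf' offline;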

Now let us test how checkpoint_change# behaves when a datafile and a tablespace are taken offline/read only.

SYS@ORCL>select name,CHECKPOINT_CHANGE#,LAST_CHANGE#,status from v$datafile;

NAME CHECKPOINT_CHANGE# LAST_CHANGE# STATUS
-------------------------------------------------- ------------------ ------------ -------
/u01/ORCL/system01.dbf 31312696 SYSTEM
/u01/ORCL/SYSAUX.dbf 31312696 ONLINE
/u01/ORCL/UNDOTBS1.dbf 31312696 ONLINE
/u01/ORCL/tools01.dbf 31312696 ONLINE
/u01/ORCL/ora01.dbf 31312696 ONLINE
/u01/ORCL/xmldbts01.dbf 31312696 ONLINE
/u01/ORCL/test1.dbf 31315480 ONLINE
/u01/ORCL/test2.dbf 31315508 ONLINE

8 rows selected.

SYS@ORCL>alter system checkpoint;

System altered.

SYS@ORCL>select name,CHECKPOINT_CHANGE#,LAST_CHANGE#,status from v$datafile;

NAME CHECKPOINT_CHANGE# LAST_CHANGE# STATUS
-------------------------------------------------- ------------------ ------------ -------
/u01/ORCL/system01.dbf 31316163 SYSTEM
/u01/ORCL/SYSAUX.dbf 31316163 ONLINE
/u01/ORCL/UNDOTBS1.dbf 31316163 ONLINE
/u01/ORCL/tools01.dbf 31316163 ONLINE
/u01/ORCL/ora01.dbf 31316163 ONLINE
/u01/ORCL/xmldbts01.dbf 31316163 ONLINE
/u01/ORCL/test1.dbf 31316163 ONLINE
/u01/ORCL/test2.dbf 31316163 ONLINE

8 rows selected.

 

Now all datafiles have the same checkpoint SCN.

I will take one datafile of tablespace TEST offline. Since the database is in ARCHIVELOG mode, OFFLINE DROP behaves the same as a plain OFFLINE here.

SYS@ORCL>alter database datafile '/u01/ORCL/test2.dbf' offline drop;

Database altered.

SYS@ORCL>select name,CHECKPOINT_CHANGE#,LAST_CHANGE#,status from v$datafile;

NAME CHECKPOINT_CHANGE# LAST_CHANGE# STATUS
-------------------------------------------------- ------------------ ------------ -------
/u01/ORCL/system01.dbf 31316163 SYSTEM
/u01/ORCL/SYSAUX.dbf 31316163 ONLINE
/u01/ORCL/UNDOTBS1.dbf 31316163 ONLINE
/u01/ORCL/tools01.dbf 31316163 ONLINE
/u01/ORCL/ora01.dbf 31316163 ONLINE
/u01/ORCL/xmldbts01.dbf 31316163 ONLINE
/u01/ORCL/test1.dbf 31316163 ONLINE
/u01/ORCL/test2.dbf 31316163 31316201 RECOVER

8 rows selected.

v$datafile_header will show the status. As the name of this dynamic view suggests, it reports what is actually recorded in the datafile header, whereas v$datafile reports the controlfile information.
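A minimal sketch (the column aliases are mine) to compare the controlfile and file-header views side by side:

select d.file#,
       d.name,
       d.checkpoint_change# ctlfile_scn,   -- from the controlfile
       h.checkpoint_change# header_scn,    -- from the datafile header
       d.status ctlfile_status,
       h.status header_status
from   v$datafile d
join   v$datafile_header h on h.file# = d.file#;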

SYS@ORCL>select name,CHECKPOINT_CHANGE#,STATUS,RECOVER from v$datafile_header;

NAME CHECKPOINT_CHANGE# STATUS REC
-------------------------------------------------- ------------------ ------- ---
/u01/ORCL/system01.dbf 31316163 ONLINE NO
/u01/ORCL/SYSAUX.dbf 31316163 ONLINE NO
/u01/ORCL/UNDOTBS1.dbf 31316163 ONLINE NO
/u01/ORCL/tools01.dbf 31316163 ONLINE NO
/u01/ORCL/ora01.dbf 31316163 ONLINE NO
/u01/ORCL/xmldbts01.dbf 31316163 ONLINE NO
/u01/ORCL/test1.dbf 31316163 ONLINE NO
/u01/ORCL/test2.dbf 31316163 OFFLINE YES

8 rows selected.

Taking a datafile offline does not perform any file/tablespace checkpoint.

v$datafile.last_change# is the stop SCN: the SCN up to which redo information is available for the datafile, as recorded in the controlfile.

In order to bring the datafile back online we need to recover it, but up to what point?
We only need the redo logs that contain the redo generated up to the offlined datafile's stop SCN; we do not need any archivelogs after that.

In the above case we only need the redo log that contains the redo at SCN 31316201, which is in sequence# 642.
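That sequence can be confirmed directly from v$archived_log (a sketch using the stop SCN above):

select sequence#, first_change#, next_change#
from   v$archived_log
where  31316201 between first_change# and next_change# - 1;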

Even if we lose the other redo logs, we can still bring the datafile online.

SYS@ORCL>select sequence#, FIRST_CHANGE#,to_Char(NEXT_CHANGE#) from v$archived_log;

SEQUENCE# FIRST_CHANGE# TO_CHAR(NEXT_CHANGE#)
---------- ------------- ----------------------------------------
 641 31303644 31312696
 642 31312696 31316487
 643 31316487 31316490
 644 31316490 31316493
 645 31316493 31316496
 646 31316496 31316499

I recover only up to the last_change# SCN and then bring the datafile online.

RMAN> recover datafile 8 until scn 31316202;

Starting recover at 05-AUG-17
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=87 device type=DISK

starting media recovery

archived log for thread 1 with sequence 642 is already on disk as file 
/u01/ORCL/flash_recovery_area/ORCL/archivelog/2017_08_05/o1_mf_1_642_drcl59k8_.arc
archived log file name=/u01/ORCL/flash_recovery_area/ORCL/archivelog/2017_08_05/o1_mf_1_642_drcl59k8_.arc thread=1 sequence=642
media recovery complete, elapsed time: 00:00:00
Finished recover at 05-AUG-17

 

SYS@ORCL>alter database datafile '/u01/ORCL/test2.dbf' online;

Database altered.

Last_change# will be set to null once it is back online.

SYS@ORCL>select name,CHECKPOINT_CHANGE#,LAST_CHANGE#,status from v$datafile;

NAME CHECKPOINT_CHANGE# LAST_CHANGE# STATUS
-------------------------------------------------- ------------------ ------------ -------
/u01/ORCL/system01.dbf 31316499 SYSTEM
/u01/ORCL/SYSAUX.dbf 31316499 ONLINE
/u01/ORCL/UNDOTBS1.dbf 31316499 ONLINE
/u01/ORCL/tools01.dbf 31316499 ONLINE
/u01/ORCL/ora01.dbf 31316499 ONLINE
/u01/ORCL/xmldbts01.dbf 31316499 ONLINE
/u01/ORCL/test1.dbf 31316499 ONLINE
/u01/ORCL/test2.dbf 31316646 ONLINE

8 rows selected.

SYS@ORCL>select name,CHECKPOINT_CHANGE#,STATUS,RECOVER from v$datafile_header;

NAME CHECKPOINT_CHANGE# STATUS REC
-------------------------------------------------- ------------------ ------- ---
/u01/ORCL/system01.dbf 31316499 ONLINE NO
/u01/ORCL/SYSAUX.dbf 31316499 ONLINE NO
/u01/ORCL/UNDOTBS1.dbf 31316499 ONLINE NO
/u01/ORCL/tools01.dbf 31316499 ONLINE NO
/u01/ORCL/ora01.dbf 31316499 ONLINE NO
/u01/ORCL/xmldbts01.dbf 31316499 ONLINE NO
/u01/ORCL/test1.dbf 31316499 ONLINE NO
/u01/ORCL/test2.dbf 31316646 ONLINE NO

8 rows selected.

Now I will take the entire tablespace offline. Taking a tablespace offline performs a checkpoint on its datafiles (compare last_change# and checkpoint_change# below), so no recovery is needed when you bring the tablespace back online.

SYS@ORCL>alter tablespace test offline;

Tablespace altered.

SYS@ORCL>select name,CHECKPOINT_CHANGE#,LAST_CHANGE#,status from v$datafile;

NAME CHECKPOINT_CHANGE# LAST_CHANGE# STATUS
-------------------------------------------------- ------------------ ------------ -------
/u01/ORCL/system01.dbf 31316499 SYSTEM
/u01/ORCL/SYSAUX.dbf 31316499 ONLINE
/u01/ORCL/UNDOTBS1.dbf 31316499 ONLINE
/u01/ORCL/tools01.dbf 31316499 ONLINE
/u01/ORCL/ora01.dbf 31316499 ONLINE
/u01/ORCL/xmldbts01.dbf 31316499 ONLINE
/u01/ORCL/test1.dbf 31316705 31316705 OFFLINE
/u01/ORCL/test2.dbf 31316705 31316705 OFFLINE

8 rows selected.

SYS@ORCL>alter tablespace test online;

Tablespace altered.

The same happens when the tablespace is placed in read only mode.

SYS@ORCL>alter tablespace test read only;

Tablespace altered.

SYS@ORCL>select name,CHECKPOINT_CHANGE#,LAST_CHANGE#,status from v$datafile;

NAME CHECKPOINT_CHANGE# LAST_CHANGE# STATUS
-------------------------------------------------- ------------------ ------------ -------
/u01/ORCL/system01.dbf 31316499 SYSTEM
/u01/ORCL/SYSAUX.dbf 31316499 ONLINE
/u01/ORCL/UNDOTBS1.dbf 31316499 ONLINE
/u01/ORCL/tools01.dbf 31316499 ONLINE
/u01/ORCL/ora01.dbf 31316499 ONLINE
/u01/ORCL/xmldbts01.dbf 31316499 ONLINE
/u01/ORCL/test1.dbf 31316936 31316936 ONLINE
/u01/ORCL/test2.dbf 31316936 31316936 ONLINE

8 rows selected.

SYS@ORCL>alter tablespace test read write;

Tablespace altered.

To summarize:

  1. Datafile OFFLINE DROP does not remove the datafile from the dictionary/controlfile; it only marks in the controlfile that SCN comparison is not needed when starting the database.
  2. Datafile OFFLINE does not perform a checkpoint, so the datafile needs recovery to be brought back online. We only need the sequence# containing the redo up to last_change# to bring the datafile online.
  3. Tablespace OFFLINE or READ ONLY performs a checkpoint, so no recovery is needed to bring the tablespace back online / into read write mode.

 

Null is Not Null – Filter

While checking something else I came across the old OTN question below, and wanted to verify the optimizer behaviour.

https://community.oracle.com/thread/2406366

TEST@XCONTEST>create table test as select RPAD(TO_CHAR(ROWNUM),3,'X') a,rownum b from dual CONNECT BY LEVEL<=1000;

Table created.

TEST@XCONTEST>create index test_idx on test (a,b);

Index created.

TEST@XCONTEST>EXEC DBMS_STATS.GATHER_TABLE_STATS(OWNNAME=>USER,TABNAME=>'TEST',CASCADE=>TRUE,ESTIMATE_PERCENT=>100);

PL/SQL procedure successfully completed.

Check for rows with a=''

TEST@XCONTEST>select * from test where a='';

no rows selected

TEST@XCONTEST>SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));

PLAN_TABLE_OUTPUT
 ----------------------------------------------------------------------------------------------------------------------------------
 SQL_ID 1f8m76xwz6mpv, child number 0
 -------------------------------------
 select * from test where a=''

Plan hash value: 766971379

---------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 ---------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | | | 1 (100)| |
 |* 1 | FILTER | | | | | |
 | 2 | TABLE ACCESS FULL| TEST | 1000 | 8000 | 2 (0)| 00:00:01 |
 ---------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

1 - filter(NULL IS NOT NULL)

The optimizer adds the NULL IS NOT NULL filter when the sub-plan beneath it can never be true; that sub-plan is never executed, which is why the parent operation shows no cost even though the child shows a cost of 2.

So why did the optimizer not pick the index, and why does the plan contain this particular filter?

A zero-length varchar2 is treated as NULL (thanks to Tom Kyte), so in the above query we are searching for a NULL value, and nothing equates to NULL; hence the child operation at id 2 never needs to be executed. The index was not used here because column A is not defined as NOT NULL.
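A quick sanity check of both points (a sketch; only the literals matter here):

-- a zero-length string is treated as NULL
select case when '' is null then 'empty string is NULL' else 'empty string is NOT NULL' end as result
from dual;

-- and nothing equates to NULL, so the predicate can never be true
select count(*) from test where a = '';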

TEST@XCONTEST>alter table test modify a not null;

Table altered.

TEST@XCONTEST>select * from test where a='';

no rows selected

TEST@XCONTEST>SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL,NULL,'TYPICAL'));

PLAN_TABLE_OUTPUT
 ----------------------------------------------------------------------------------------------------
 SQL_ID bzgnr9bg1z2ps, child number 0
 -------------------------------------
 select * from test where a=''

Plan hash value: 1731493281

-----------------------------------------------------------------------------
 | Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
 -----------------------------------------------------------------------------
 | 0 | SELECT STATEMENT | | | | 1 (100)| |
 |* 1 | FILTER | | | | | |
 | 2 | INDEX FULL SCAN| TEST_IDX | 1000 | 8000 | 1 (0)| 00:00:01 |
 -----------------------------------------------------------------------------

Predicate Information (identified by operation id):
 ---------------------------------------------------

1 - filter(NULL IS NOT NULL)

19 rows selected.

DBMS_METADATA DDL and Interval Partitions

I am writing this post after an OTN question about DBMS_METADATA not showing the DDL for partitions created automatically by interval partitioning, and Jonathan Lewis's explanation of the rationale for this behaviour.

create table test3
 (
 id number,
 created_dt date
 )
 partition by range (created_dt)
 interval (numtoyminterval(1, 'month'))
 (
 partition p_feb2017 values less than (to_date('2017-03-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian')) ,
 partition p_mar2017 values less than (to_date('2017-04-01 00:00:00', 'syyyy-mm-dd hh24:mi:ss', 'nls_calendar=gregorian'))
 ) ;
 SQL> select partition_name,partition_position,high_value from user_tab_partitions where table_name ='TEST3' order by partition_position;

PARTITION_NAME PARTITION_POSITION HIGH_VALUE
 -------------------- ------------------ --------------------------------------------------------------------------------
 P_FEB2017 1 TO_DATE(' 2017-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
 P_MAR2017 2 TO_DATE(' 2017-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

Check the DBMS_METADATA output for the table.

SQL> select dbms_metadata.get_ddl('TABLE','TEST3','RAKESH') from dual;
 DBMS_METADATA.GET_DDL('TABLE','TEST3','RAKESH')
 --------------------------------------------------------------------------

CREATE TABLE "RAKESH"."TEST3"
 ( "ID" NUMBER,
 "CREATED_DT" DATE
 ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 STORAGE(
 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
 TABLESPACE "USERS"
 PARTITION BY RANGE ("CREATED_DT") INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
  (PARTITION "P_FEB2017" VALUES LESS THAN (TO_DATE(' 2017-03-01 00:00:00', 'SYYY
 Y-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) SEGMENT CREATION DEFERRED
 PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
 STORAGE(
 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
 TABLESPACE "USERS" ,
 PARTITION "P_MAR2017" VALUES LESS THAN (TO_DATE(' 2017-04-01 00:00:00', 'SYYYY
 -MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) SEGMENT CREATION DEFERRED
 PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
 STORAGE(
 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
 TABLESPACE "USERS" )

Insert new data into the table.

SQL> insert into test3 values (1,to_date('8-MAY-2017', 'DD-MON-YYYY'));

1 row created.

SQL> commit;

Commit complete.

Interval partition created.

SQL> select partition_name,partition_position,high_value from user_tab_partitions where table_name ='TEST3' order by partition_position;

PARTITION_NAME PARTITION_POSITION HIGH_VALUE
 -------------------- ------------------ --------------------------------------------------------------------------------
 P_FEB2017 1 TO_DATE(' 2017-03-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
 N')

P_MAR2017 2 TO_DATE(' 2017-04-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
 N')

SYS_P911 3 TO_DATE(' 2017-06-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
  N')

But the DDL from DBMS_METADATA still shows only the partitions that were defined when the table was first created; it does not include the new partition created for May.

CREATE TABLE "RAKESH"."TEST3"
 ( "ID" NUMBER,
 "CREATED_DT" DATE
 ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 STORAGE(
 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
 TABLESPACE "USERS"
 PARTITION BY RANGE ("CREATED_DT") INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
 (PARTITION "P_FEB2017" VALUES LESS THAN (TO_DATE(' 2017-03-01 00:00:00', 'SYYY
 Y-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) SEGMENT CREATION DEFERRED
 PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
 STORAGE(
 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
 TABLESPACE "USERS" ,
 PARTITION "P_MAR2017" VALUES LESS THAN (TO_DATE(' 2017-04-01 00:00:00', 'SYYYY
 -MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIAN')) SEGMENT CREATION DEFERRED
 PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 NOCOMPRESS LOGGING
 STORAGE(
 BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
 TABLESPACE "USERS" )

Thanks to Jonathan Lewis, who left a comment explaining the rationale behind DBMS_METADATA not showing the interval partitions. I am quoting his words.

“There is a simple rationale for this behaviour.

Imagine the only out-of-range data you’ve added is for December – Oracle will add the december partition but will not add the partitions for april through to november.
If dbms_metadata.get_ddl now created the DDL to create partitions with the three boundaries that existed the code would create three RANGE partitions, which means the december partition would cover april to december and act as the lower bound for the interval partitions, and you wouldn’t get (e.g.) a dedicated june partition created if you then inserted some data for June”
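To illustrate the scenario he describes with this test table (a sketch; the December date and value are illustrative), inserting a December row creates only one new system-generated partition, with nothing for the months in between:

insert into test3 values (2, to_date('15-DEC-2017','DD-MON-YYYY'));
commit;

-- only one extra SYS_P partition (upper bound 2018-01-01) should appear;
-- no partitions are created for June through November
select partition_name, partition_position, high_value
from   user_tab_partitions
where  table_name = 'TEST3'
order  by partition_position;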

 

 

Oracle 12c Internal Users

DBA_USERS in Oracle 12c includes a new column ORACLE_MAINTAINED, which shows whether the user is an Oracle-maintained (internal) account or not.

SQL> desc dba_users
Name                                      Null?    Type
----------------------------------------- -------- ----------------------------
USERNAME NOT NULL VARCHAR2(128)
USER_ID NOT NULL NUMBER
PASSWORD VARCHAR2(4000)
ACCOUNT_STATUS NOT NULL VARCHAR2(32)
LOCK_DATE DATE
EXPIRY_DATE DATE
DEFAULT_TABLESPACE NOT NULL VARCHAR2(30)
TEMPORARY_TABLESPACE NOT NULL VARCHAR2(30)
CREATED NOT NULL DATE
PROFILE NOT NULL VARCHAR2(128)
INITIAL_RSRC_CONSUMER_GROUP VARCHAR2(128)
EXTERNAL_NAME VARCHAR2(4000)
PASSWORD_VERSIONS VARCHAR2(12)
EDITIONS_ENABLED VARCHAR2(1)
AUTHENTICATION_TYPE VARCHAR2(8)
PROXY_ONLY_CONNECT VARCHAR2(1)
COMMON VARCHAR2(3)
LAST_LOGIN TIMESTAMP(9) WITH TIME ZONE
ORACLE_MAINTAINED VARCHAR2(1)

SQL> select username,oracle_maintained,account_status from dba_users where oracle_maintained='Y' ;

USERNAME             O ACCOUNT_STATUS
-------------------- - --------------------------------
GSMCATUSER Y EXPIRED & LOCKED
SYSBACKUP Y EXPIRED & LOCKED
OUTLN Y OPEN
SYSDG Y EXPIRED & LOCKED
DBSNMP Y OPEN
ORACLE_OCM Y EXPIRED & LOCKED
SYSKM Y EXPIRED & LOCKED
XS$NULL Y EXPIRED & LOCKED
GSMUSER Y EXPIRED & LOCKED
AUDSYS Y EXPIRED & LOCKED
SYS Y OPEN
SYSTEM Y OPEN
DIP Y EXPIRED & LOCKED
ANONYMOUS Y LOCKED
XDB Y EXPIRED & LOCKED
APPQOSSYS Y EXPIRED & LOCKED
GSMADMIN_INTERNAL Y EXPIRED & LOCKED

17 rows selected.
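The complementary filter is handy for listing only the application (non-Oracle-maintained) accounts; a minimal sketch:

select username, account_status
from   dba_users
where  oracle_maintained = 'N'
order  by username;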

Convert Admin managed to Policy Managed Database in RAC

I had not come across a policy-managed database in RAC until now, so I am testing out the conversion from admin-managed to policy-managed and vice versa.

Convert Admin Managed to Policy Managed Database:

Current Config Status of database

$ srvctl config database -d TESTCDB
Database unique name: TESTCDB
Database name: TESTCDB
Oracle home: /u01/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile: +DATA_GRP_1/TESTCDB/PARAMETERFILE/spfile.315.928144379
Password file: +DATA_GRP_1/TESTCDB/PASSWORD/pwdtestcdb.300.928143681
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups: REDO_GRP_2,DATA_GRP_1
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: dba
Database instances: TESTCDB1,TESTCDB2
Configured nodes: VM1,VM2
Database is administrator managed

Server pool List:

$ srvctl status srvpool
Server pool name: Free
Active servers count: 0
Server pool name: Generic
Active servers count: 2

Add new Server pool TEST

$ srvctl add srvpool -serverpool TEST -min 0 -max 2
$ srvctl status srvpool
Server pool name: Free
Active servers count: 0
Server pool name: Generic
Active servers count: 2
Server pool name: TEST
Active servers count: 0

Modify TESTCDB to server pool TEST.

$ srvctl modify database -d TESTCDB -g TEST -force

$ srvctl config database -d TESTCDB
Database unique name: TESTCDB
Database name: TESTCDB
Oracle home: /u01/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile: +DATA_GRP_1/TESTCDB/PARAMETERFILE/spfile.315.928144379
Password file: +DATA_GRP_1/TESTCDB/PASSWORD/pwdtestcdb.300.928143681
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: TEST
Disk Groups: REDO_GRP_2,DATA_GRP_1
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: dba
Database instances:
Configured nodes:
Database is policy managed

Convert Policy Managed Database to Admin Managed Database:

To convert a policy-managed database back to admin-managed, you have to remove it from the cluster registry first. To remove it gracefully, stop the database and then remove it. As this is a test instance, I am using the "-force" option.

$ srvctl remove database -d TESTCDB
PRKO-3141 : Database TESTCDB could not be removed because it was running
$ srvctl remove database -d TESTCDB -force

$ srvctl status database -d TESTCDB
PRCD-1120 : The resource for database TESTCDB could not be found.
PRCR-1001 : Resource ora.testcdb.db does not exist
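For the graceful path described above (a sketch; stop the database first so -force is not needed):

$ srvctl stop database -d TESTCDB
$ srvctl remove database -d TESTCDB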

Add database and instances to cluster again.

$ srvctl add database -d TESTCDB -o /u01/app/oracle/product/12.1.0.2/db_1 -y AUTOMATIC
$ srvctl config database -d TESTCDB
Database unique name: TESTCDB
Database name:
Oracle home: /u01/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile:
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups:
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: dba
Database instances:
Configured nodes:
Database is administrator managed

Add Instances , spfile and password file to the registry

$ srvctl add instance -d TESTCDB -i TESTCDB1 -n VM1
$ srvctl add instance -d TESTCDB -i TESTCDB2 -n VM2
$ srvctl modify database -d TESTCDB -p +DATA_GRP_1/TESTCDB/PARAMETERFILE/spfile.315.928144379
$ srvctl modify database -d TESTCDB -pwfile +DATA_GRP_1/TESTCDB/PASSWORD/pwdtestcdb.300.928143681
$ srvctl config database -d TESTCDB
Database unique name: TESTCDB
Database name:
Oracle home: /u01/app/oracle/product/12.1.0.2/db_1
Oracle user: oracle
Spfile: +DATA_GRP_1/TESTCDB/PARAMETERFILE/spfile.315.928144379
Password file: +DATA_GRP_1/TESTCDB/PASSWORD/pwdtestcdb.300.928143681
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools:
Disk Groups:
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
OSDBA group: dba
OSOPER group: dba
Database instances: TESTCDB1,TESTCDB2
Configured nodes: VM1,VM2
Database is administrator managed
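After re-registering, the database resource still needs to be started; a quick sketch to start it and confirm both instances are up:

$ srvctl start database -d TESTCDB
$ srvctl status database -d TESTCDB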

Ref: https://docs.oracle.com/database/121/RACAD/admcon.htm#RACAD7282

Automatic Memory Management in Oracle 11g

This post is to verify my understanding of how shared memory is managed under AMM. Thanks to Tanel Poder's post on AMM internals.

Current memory parameter settings in the DB; not all of the 50 GB is allocated.

SQL> sho parameter memory

NAME TYPE VALUE
 ------------------------------------ -------------------------------- -----
 hi_shared_memory_address integer 0
 memory_max_target big integer 50G
 memory_target big integer 50G
 shared_memory_address integer 0

SGA parameters set in DB

SQL> sho parameter sga

NAME TYPE VALUE
 -------------------------
 lock_sga boolean FALSE
 pre_page_sga boolean FALSE
 sga_max_size big integer 50G
 sga_target big integer 20G

In AMM, Oracle manages shared memory as files in the shared memory filesystem (/dev/shm).

cd /dev/shm

$ ls -ltr *TEST* |more
 -rw-r----- 1 oracle dba 134217728 Apr 26 18:42 ora_TEST_131235851_0
 -rw-r----- 1 oracle dba 134217728 Apr 26 18:42 ora_TEST_131203082_369

Each file is 128M (134217728 bytes) in size.

ls -ltr *TEST* | awk '$5==134217728' |wc -l
 170

170*128M=21.25 GB
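The same total can be computed straight from the file sizes (a sketch, summing only this instance's non-empty segment files):

$ ls -l /dev/shm/ora_TEST_* | awk '$5 > 0 {sum += $5} END {printf "%.2f GB\n", sum/1024/1024/1024}'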

So 21.25 GB of shared memory is currently allocated to database TEST. Initially Oracle allocates (check the files in /dev/shm) the amount set by sga_target; if sga_target is not set, the initial amount is derived from the memory_target value.

The remaining files with 0 bytes correspond to (MEMORY_TARGET - SGA_TARGET). These are memory files mapped into the server processes but not yet used; if space is needed for the PGA, these files are freed and the memory is allocated to the PGA.

$ ls -ltr *TEST* | awk '$5==0' |more
 -rw-r----- 1 oracle dba 0 Apr 26 18:42 ora_TEST_131203082_230
 -rw-r----- 1 oracle dba 0 Apr 26 18:42 ora_TEST_131203082_229
 -rw-r----- 1 oracle dba 0 Apr 26 18:42 ora_TEST_131203082_228
 -rw-r----- 1 oracle dba 0 Apr 26 18:42 ora_TEST_131203082_227
 -rw-r----- 1 oracle dba 0 Apr 26 18:42 ora_TEST_131203082_226
 -rw-r----- 1 oracle dba 0 Apr 26 18:42 ora_TEST_131203082_225

Number of empty files that can be freed whenever memory needs to be allocated to the PGA.

$ ls -ltr *TEST* | awk '$5==0' |wc -l
 231

231*128M = 28.875 GB (roughly MEMORY_TARGET - current SGA size)

COMPONENT CURRENT_SIZE MIN_SIZE MAX_SIZE USER_SPECIFIED_SIZE OPER_COUNT LAST_OPER_TYP LAST_OPER LAST_OPER GRANULE_SIZE
 -------------------- ------------ ---------- ---------- ------------------- ---------- ------------- --------- --------- ------------
 SGA Target 2.2683E+10 2.2012E+10 2.2683E+10 2.1475E+10 5 GROW DEFERRED 02-MAY-16 134217728
 
SQL> select to_char(2.2683E+10)/1024/1024/1024 from dual;

TO_CHAR(22683000000)/1024/1024/1024
 -----------------------------------
 21.12519

The current SGA target size is about 21 GB, which matches the total size of the shared memory files created for the DB in /dev/shm.
If /dev/shm is not defined or not properly sized, then enabling AMM for the instance throws the error ORA-00845: MEMORY_TARGET not supported on this system.

ASMM can be configured without /dev/shm.

Summary:

During instance startup, Oracle checks whether /dev/shm has enough free space for the value specified in memory_max_target (even though not all of it is initialized/allocated); otherwise we get ORA-00845.
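A quick pre-check of that space (a sketch):

$ df -h /dev/shm

The available space reported here needs to cover memory_max_target, otherwise startup fails with ORA-00845.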

If the space is sufficient, virtual address space in /dev/shm is mapped for the full memory_target value, but only sga_target plus a small amount for the PGA is actually initialized (check the /dev/shm files with a non-zero size); the remaining files are mapped but stay at 0 bytes. Because of this, /dev/shm can show more free space than (total tmpfs size - memory_target).

If the instance did start, but sga_target later grows and there is no space left in /dev/shm to accommodate growth up to memory_target, the instance terminates and produces a core dump with exceptions like the one below.

Exception [type: SIGBUS, Non-existent physical address] [ADDR:0x1EDA76000] [PC:0x6080DD4, __intel_memset()+2660] [flags: 0x0, count: 1] 

 

 

 

Ref: http://blog.tanelpoder.com/2007/08/21/oracle-11g-internals-part-1-automatic-memory-management/