Wednesday, February 22, 2012

oracle alter session set current_schema


Today I was given a Notepad file containing a DDL & DML script and was told to run it as the DEMO user in my dev environment.

I don't have WinSCP here (to copy the file from Windows to my Oracle database server on Linux) and I don't have the password for the DEMO user, so in this kind of situation I ran the script as the DEMO user by setting CURRENT_SCHEMA=DEMO.

First I copied the script from the Notepad file, pasted it into the vi editor, and saved it as index.sql.
Instead of prefixing every index name in the file with the schema name, it is easier to change the session's CURRENT_SCHEMA.
Then,

sqlplus / as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Wed Feb 22 16:53:27 2012

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning and Automatic Storage Management options

SQL> show user
USER is "SYS"

SQL> alter session set current_schema=DEMO;
Session altered.

SQL> show user
USER is "SYS"

It still shows SYS, because ALTER SESSION SET CURRENT_SCHEMA changes only the default schema used for name resolution, not the logged-in user.

SQL> select sys_context('USERENV','SESSION_USER') current_user,sys_context('USERENV','SESSION_SCHEMA') current_schema from dual;

CURRENT_USER
--------------------------------------------------------------------------------
CURRENT_SCHEMA
--------------------------------------------------------------------------------
SYS
DEMO

Then I ran the script from the SQL prompt:

SQL> @/opt/oracle/index.sql

Now unqualified object names resolve to the DEMO schema, so the script runs as if you had logged in as DEMO (the session still uses SYS privileges; only name resolution changes). A quick illustration is shown below.
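For example, a hypothetical unqualified CREATE INDEX (assuming DEMO owns a table named EMPLOYEES; the names are just for illustration) creates the index in the DEMO schema, not in SYS:

SQL> create index emp_name_idx on employees (last_name);
SQL> select owner from dba_indexes where index_name = 'EMP_NAME_IDX';   -- returns DEMO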
After the job is finished, set the schema back to SYS using the same command:


SQL> alter session set current_schema=SYS; 


SQL> select sys_context('USERENV','SESSION_USER') current_user,sys_context('USERENV','SESSION_SCHEMA') current_schema from dual;

CURRENT_USER
--------------------------------------------------------------------------------
CURRENT_SCHEMA
--------------------------------------------------------------------------------
SYS
SYS

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning and Automatic Storage Management options
opt/oracle >  sqlplus / as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Wed Feb 22 17:56:09 2012

Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning and Automatic Storage Management options

SQL> select sys_context('USERENV','SESSION_USER') current_user,sys_context('USERENV','SESSION_SCHEMA') current_schema from dual;

CURRENT_USER
--------------------------------------------------------------------------------
CURRENT_SCHEMA
--------------------------------------------------------------------------------
SYS
SYS


So if you exit from SQL*Plus and reconnect, your default schema is no longer DEMO.


NOTE: The same thing can be done in TOAD by setting the current schema to DEMO on the Schema Browser page; it is quite a bit easier if you have TOAD.




Monday, February 20, 2012

Datapump SQLFILE parameter during Import



NOTE: The SQLFILE parameter works only for import (impdp), and a dump file must be available.


For the past few days I have been dealing with Data Pump exports and imports for my migration project on Linux servers (RHEL 5.4), and I learned a lot of new things after sweating it out. Today I created an index-creation DDL script from my existing Data Pump dump files using the SQLFILE parameter.

SQLFILE: This parameter enables you to extract the DDL from a dump file without importing any data. The SQL file includes DDL for tablespaces, users, role grants, packages, procedures, functions, tables, indexes, primary and foreign keys, and so on.
The INCLUDE parameter lets you target the DDL you are interested in; for example, specifying INCLUDE=USER will give you only the CREATE USER statements.
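For instance, a minimal command-line version (the connecting user, directory object, and dump file name below are just placeholders):

$ impdp system/******** directory=DPUMP_DIR dumpfile=full_%U.dmp sqlfile=create_users.sql include=USER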

SQL>  select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 – Production

In the example shown here I already have dump files (created for a schema refresh), and now I am generating an index-creation DDL script for my schema tables.

NOTE: When you execute impdp with the SQLFILE option, it does not import any data into the actual tables or the schema.

vi  ind_sqlfile.par

DIRECTORY=DPUMP_OMEGA_DIR1
DUMPFILE=DPUMP_OMEGA_DIR1:SYSFM_%U.dmp
LOGFILE=LOGFILE_OMEGA_dir1:create_ddl.log
PARALLEL=10
INCLUDE=INDEX
SCHEMAS=SYSFM
SQLFILE=createidx.sql


$ impdp sacorp/********  parfile=ind_sqlfile.par

Import: Release 11.2.0.2.0 - Production on Fri Feb 17 20:45:46 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters and Automatic Storage Management options
Master table "SACORP"."SYS_SQL_FILE_SCHEMA_01" successfully loaded/unloaded
Starting "SACORP"."SYS_SQL_FILE_SCHEMA_01":  sacorp/********  parfile=ind_sqlfile.par
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Job "SACORP"."SYS_SQL_FILE_SCHEMA_01" successfully completed at 20:56:50

The job ran successfully and the file 'createidx.sql' was created in the location pointed to by the dump directory object.
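To find the operating-system path behind the directory object (so you can open the generated file), you can query DBA_DIRECTORIES:

SQL> select directory_path from dba_directories where directory_name = 'DPUMP_OMEGA_DIR1';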

Datapump Export Schema statistics only


Exporting schema statistics only from one server to another can be done in two ways.

METHOD 1:  using DATAPUMP

Hello everyone, today I did a Data Pump export of only the schema metadata (the structure of all schema objects) without moving the actual data. To do so, use the CONTENT=METADATA_ONLY parameter.

CONTENT={ALL | DATA_ONLY | METADATA_ONLY}
· ALL unloads both data and metadata. This is the default.
· DATA_ONLY unloads only table row data; no database object definitions are unloaded.
· METADATA_ONLY unloads only database object definitions; no table row data is unloaded.


SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
PL/SQL Release 11.2.0.1.0 - Production
CORE    11.2.0.1.0      Production
TNS for Linux: Version 11.2.0.1.0 - Production
NLSRTL Version 11.2.0.1.0 – Production


vi expdp_metadata.par

DIRECTORY=DPUMP_OMEGA_DIR1
DUMPFILE=SYSFM_META_%U.dmp
LOGFILE=LOGFILE_OMEGA_DIR1:metadata.log
PARALLEL=16
SCHEMAS=SYSFM
CONTENT=METADATA_ONLY
JOB_NAME=metadata_expdp

$ expdp  SYSFM/********  parfile=expdp_metadata.par

Export: Release 11.2.0.1.0 - Production on Fri Feb 17 21:31:00 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management and Real Application Testing options
Starting "SYSFM"."METADATA_EXPDP":  SYSFM/******** parfile=expdp_metadata.par
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/DB_LINK
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/COMMENT
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/VIEW/VIEW
Processing object type SCHEMA_EXPORT/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/TRIGGER
Processing object type SCHEMA_EXPORT/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Master table "SYSFM"."METADATA_EXPDP" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSFM.METADATA_EXPDP is:
  /var/backup/omega/SYSFM_META_01.dmp
Job "SYSFM"."METADATA_EXPDP" successfully completed at 22:52:11

So the dump file SYSFM_META_01.dmp has been created in the dump directory '/var/backup/omega'.

To check the status of the Data Pump job while it is running:

SQL> select  OWNER_NAME,JOB_NAME,STATE  from  dba_datapump_jobs  where  JOB_NAME = 'METADATA_EXPDP';

Copy the dump file to the destination server and perform the corresponding impdp steps.
If you want to export all of the metadata except a few object types, add the EXCLUDE parameter to the parfile, e.g. EXCLUDE=GRANT and EXCLUDE=STATISTICS (one EXCLUDE clause per object type).

METHOD 2:  using dbms_stats package

Step 1:
On the source side, create a statistics table to hold the schema statistics using DBMS_STATS.CREATE_STAT_TABLE('<SCHEMA_NAME>','<TABLE_NAME>').

SQL> Conn sysfm/******
Connected.

SQL> EXEC DBMS_STATS.create_stat_table('SYSFM','STATS');
PL/SQL procedure successfully completed.

Step 2:
Export the SYSFM schema statistics into the table STATS.

begin
dbms_stats.export_schema_stats(ownname => 'SYSFM',stattab => 'STATS');
end;
/

Now all statistics from the SYSFM schema are stored in the table named STATS. This table must be transferred to the destination database.

Step 3:
Move the STATS table that was just created to the destination server using the expdp or exp utility, or over a database link.
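For example, a minimal Data Pump sketch (the directory objects and dump file name here are placeholders for whatever exists on each server):

On the source:
$ expdp sysfm/******** directory=DPUMP_OMEGA_DIR1 dumpfile=stats_tab.dmp tables=STATS

On the destination, after copying stats_tab.dmp into the destination directory:
$ impdp sysfm/******** directory=DPUMP_OMEGA_DIR1 dumpfile=stats_tab.dmp tables=STATS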

Once the STATS table exists in the destination database under the same schema, SYSFM, load the statistics into the data dictionary:

begin
  dbms_stats.import_schema_stats(ownname => 'SYSFM', stattab => 'STATS');
end;
/

PL/SQL procedure successfully completed.


Friday, February 17, 2012

ORA-01654: unable to extend ..........


While doing a Data Pump import on an RHEL 5.4 server, everything went well until the end of the job, when the import hit a resumable wait with the error below.

ERROR:
ORA-01654: unable to extend index SYS.I_HH_OBJ#_COL# by 128 in tablespace SYSTEM
ORA-39171: Job is experiencing a resumable wait.

Solution:
Check the size, autoextend setting, and maxsize of the data files in the SYSTEM tablespace, and how much free space is left:

SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 – Production

SQL> select file_name, bytes, autoextensible, maxbytes from dba_data_files where tablespace_name='SYSTEM';

SQL> select sum(bytes)/1024/1024 MB from dba_free_space  where TABLESPACE_NAME='SYSTEM';

The SYSTEM tablespace had no more space to allocate, so I increased the size of its data file and the problem was solved.

SQL> alter database datafile '<path_to_datafile>' resize <larger_size>;
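For example (the file path and sizes here are just placeholders for whatever your environment uses):

SQL> alter database datafile '/u01/oradata/ORCL/system01.dbf' resize 2g;

Alternatively, you can let the file grow on demand:

SQL> alter database datafile '/u01/oradata/ORCL/system01.dbf' autoextend on next 100m maxsize 4g;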

Thursday, February 16, 2012

ORA-31655,ORA-39154 while Datapump Import


Today I was performing a SYSFM schema import as the MYCORP user on my test server.

$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 5.4 (Tikanga)

I already have the dump files from the prod server and am ready to run the import on the test server with my parfile.

vi sysfm_impdp.par

DIRECTORY=DPUMP_OMEGA_DIR1
DUMPFILE=DPUMP_OMEGA_DIR1:SYSFM_%U.dmp
LOGFILE=LOGFILE_OMEGA_dir1:sysfm_impdp.log
PARALLEL=10
SCHEMAS=SYSFM
JOB_NAME=sysfm_Import


$ impdp mycorp/***** parfile=sysfm_impdp.par

Import: Release 11.2.0.2.0 - Production on Thu Feb 16 15:45:25 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.
Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, Real Application Clusters and Automatic Storage Management options
ORA-31655: no data or metadata objects selected for job
ORA-39154: Objects from foreign schemas have been removed from import
Master table "MYCORP"."SYSFM_IMPORT" successfully loaded/unloaded
Starting "MYCORP"."SYSFM_IMPORT":  mycorp/********  parfile=sysfm_impdp.par
Job "MYCORP"."SYSFM_IMPORT" successfully completed at 15:47:11

I checked the database and found that nothing had been imported. After struggling for a while, I found the cause of the error: the importing user had no privileges to import objects into another user's schema. So I granted the IMP_FULL_DATABASE role to the MYCORP user, which is the user running the Data Pump schema import.

SQL> select  banner  from  v$version;
BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 – Production

SQL> grant imp_full_database to MYCORP;
Grant succeeded.
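To confirm the role is now in place (a simple sanity check):

SQL> select granted_role from dba_role_privs where grantee = 'MYCORP';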

Then I started the import again, and this time it ran successfully.
Check DBA_DATAPUMP_JOBS to confirm the import job is running:

SQL> select OWNER_NAME,JOB_NAME,STATE from dba_datapump_jobs;

OWNER_NAME           JOB_NAME             STATE
-------------------- -------------------- --------------------
MYCORP               SYSFM_IMPORT         EXECUTING


Tuesday, February 14, 2012

ORA-31626, ORA-31633 while running Datapump job


Today I was doing a schema refresh from the dev to the test server in my Linux environment. I placed all the parameters in a parfile (schema_expdp.par), wrapped the expdp call in a shell script (expdp_SYS.sh), and started it with the nohup command.

$ nohup sh expdp_SYS.sh > schema.out &
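The wrapper script itself is not shown in my notes; a minimal sketch of what expdp_SYS.sh might look like (the connecting user and ORACLE_SID are assumptions):

#!/bin/sh
# expdp_SYS.sh -- hypothetical wrapper around the parfile
export ORACLE_SID=DEVDB
expdp system/******** parfile=schema_expdp.par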

In schema.out (the nohup output file for the expdp job) I found the error below:

ORA-31626: job does not exist
ORA-31633: unable to create master table "SYSTEM.FULL_EXPDP_DEVDB"

ORA-06512: at "SYS.DBMS_SYS_ERROR", line 95

ORA-06512: at "SYS.KUPV$FT", line 1020
ORA-00955: name is already used by an existing object

After some research, I found the cause of this error:

Cause: Job creation failed because the master table (a table created when a Data Pump job starts and dropped when it completes) and its indexes could not be created, most commonly because a table with the same name as the job already exists in the user's schema.

Solution: drop the leftover table named after the job (here SYSTEM.FULL_EXPDP_DEVDB, i.e. schema.job_name) and everything returns to normal, or change the Data Pump JOB_NAME.
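Before dropping anything, it is worth confirming that the table really is an orphaned master table and not an active job (the object name below matches the one from the ORA-31633 message):

SQL> select owner_name, job_name, state from dba_datapump_jobs where job_name = 'FULL_EXPDP_DEVDB';
SQL> select owner, object_name, object_type from dba_objects where object_name = 'FULL_EXPDP_DEVDB';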

I decided to change the JOB_NAME in the parfile and run the expdp script once again with nohup, as shown below:


$vi schema_expdp.par

DIRECTORY=DPUMP_DIR1
DUMPFILE=SYSFM_%U.dmp
LOGFILE=LOGFILE_DIR1:sysfm.log
PARALLEL=16
SCHEMAS=SYSFM
# JOB_NAME was sysfm_expdp; changed to newsysfm_expdp
JOB_NAME=newsysfm_expdp

:wq

$ nohup sh expdp_SYS.sh > schema.out &

This time the nohup command ran successfully; I checked that the expdp job had started and that the log file looked fine.

$ ps -ef | grep expdp

The Data Pump (expdp) job was found running, and the log file in the specified location confirmed the export was progressing.


EXPLANATION:
The key line is the ORA-00955. expdp creates a "temporary" master table into which it writes information about the export, and it removes that table at the end of the job.
If you look in the schema you are using to run Data Pump, you will find a table with the same name as the job. In my case it was left behind by a rather sudden system shutdown.
The solution? Drop the leftover schema.job_name table and everything returns to normal, or change the JOB_NAME.

NOTE:
It is worth thinking about Data Pump job names: make sure there isn't already an object in the job owner's schema with that name, or you'll hit the same problem.

Block change tracking (BCT)


With Oracle 10g Release 1 Enterprise Edition, Oracle introduced the block change tracking (BCT) feature, which is used to speed up RMAN incremental backups. In Oracle 9i we could create incremental backups with levels 0 to 4 (a level 0 backup is a full backup). In Oracle 10g these levels still exist, but Oracle's suggested backup strategy uses only incremental level 0 and level 1 backups.

Before Oracle 10g, during an RMAN incremental backup Oracle read every block in the database and compared the SCN in each block with the SCN of the base backup; if the block's SCN was greater, the block was a candidate for the new incremental backup. With BCT, instead of reading every block in the database, Oracle reads only the changed blocks, which saves a lot of time.

Once BCT is enabled, this 10g feature records the blocks modified since the last backup in a block change tracking file. During backups, RMAN uses this file to identify the specific blocks that must be backed up, which improves performance because RMAN no longer has to scan whole data files to detect changed blocks.
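A quick way to see this in action (a hedged example; backup levels and the query are standard, but your backup strategy may differ): take a level 0 backup, then a level 1 incremental, and check whether change tracking was used.

RMAN> backup incremental level 0 database;
RMAN> backup incremental level 1 database;

SQL> select file#, used_change_tracking from v$backup_datafile where incremental_level = 1;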

Logging of changed blocks is performed by the CTWR (change tracking writer) process, a new background process introduced in 10g that is responsible for writing data to the BCT file. By default, the block change tracking file is created as an Oracle-managed file in the location specified by DB_CREATE_FILE_DEST.

By default, Oracle does not record block change information. You can enable or disable change tracking while the database is open or mounted.

Enable Block Change tracking and set location

SQL> alter system set db_create_file_dest='location' SCOPE=BOTH;
SQL> alter database enable block change tracking;

Manually specify a location for the block change tracking file:
SQL> alter database enable block change tracking using file 'location';

Disable block change tracking
SQL> alter database disable block change tracking;
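To check whether BCT is currently enabled and where the tracking file lives:

SQL> select status, filename, bytes from v$block_change_tracking;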

Moving the block change tracking file without a database shutdown:
If your database is a critical 24x7 production system and you cannot shut it down, follow the steps below. Note that you must disable change tracking first, and you will lose the old contents of the change tracking file if you choose this method.

Step 1: Disable change tracking
SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;

Step 2: Re-enable it at the new location
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE 'new_location';

Moving the block change tracking file with a database shutdown:
Step 1: Determine the current location of the change tracking file:
SQL> SELECT filename FROM v$block_change_tracking;

Step 2: Shut down the database, then move or copy the tracking file.
SQL> SHUTDOWN IMMEDIATE
$ cp /old_location/block_change_tracking.ora /new_location/block_change_tracking.ora

Step 3: Mount the database and rename the change tracking file to the new location.
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE RENAME FILE 'old_location' TO 'new_location';

Step 4: Open the database.
SQL> ALTER DATABASE OPEN;

In a RAC (Real Application Clusters) environment, the block change tracking file must be located on shared storage so that it is accessible to all nodes. There is one BCT file per database, and it is created in the location defined by the DB_CREATE_FILE_DEST parameter.

SQL> desc v$block_change_tracking

 Name                 Null?    Type
 -------------------- -------- --------------
 STATUS                        VARCHAR2(10)
 FILENAME                      VARCHAR2(513)
 BYTES                         NUMBER


Change Tracking Writer (CTWR)

As data blocks change, the Change Tracking Writer (CTWR) background process tracks the changed blocks in a private area of memory.

When a commit is issued against the data block, the block change tracking information is copied to a shared area in the large pool called the CTWR buffer. At checkpoint time, the CTWR process writes the information from this buffer to the change tracking file.


You can view the size of the CTWR buffer by looking at V$SGASTAT:

SQL> SELECT * FROM v$sgastat WHERE name LIKE 'CTWR%';


Friday, February 10, 2012

High Water Mark (HWM)


Oracle uses the high water mark (HWM) to identify the highest amount of space ever used by a particular segment. It acts as the boundary between used and unused space. As the amount of data grows through inserts and updates, the segment's high water mark grows accordingly. But when you delete rows, the HWM does not come down automatically; extra steps are needed to bring it down.

The HIGH WATER MARK is set at the beginning of the segment when the table is created. Deleting rows will never reset the HWM even if all the rows are deleted from the table. 

To reduce the high water mark:
a. Export the table and re-import it
b. Truncate the table (only if you want to remove the rows completely)

Suppose we create an empty table; the high water mark is at the beginning of the table segment:


[----------------------------- unused space -----------------------------]
^
HWM (at the start of the segment)

When you insert rows into the table, the high water mark is bumped up step by step. This is done by the server process that performs the inserts.

                           
[ used data/rows ][------------------- unused space -------------------]
                  ^
                  HWM
Now suppose we insert 10,000 rows into the table, and later delete 5,000 of them.
                    
[ used data ][ empty blocks ][------------- unused space -------------]
                             ^
                             HWM
<---------- full table scan reads all blocks up to the HWM ---------->

As you can see above, deleting data does not move the HWM. The main disadvantage of this is that Oracle always reads the blocks up to the high water mark during a full table scan. You may have noticed that a COUNT(*) on an "empty" table can still take time to return 0 rows; the reason for the delay is that the HWM is set at a higher position.

NOTE: Whenever the optimizer chooses a full table scan, it scans all the blocks below the HWM. This degrades performance and slows down the full table scan. To avoid this, we need to shrink the table to reset the HWM.

So how do we set the high water mark at a lower position?
The simplest way to reset the HWM is to truncate the table. Let us see how truncate resets the HWM.


[------------------------ no data in the segment ------------------------]
^
HWM (back at the start of the segment)

HWM is reset now, after truncating data.   

HWMBLOCKS = "blocks containing data" + "empty blocks"

            HWMBLOCKS - "blocks containing data"
HWM % =   ( ------------------------------------- ) * 100
                        HWMBLOCKS

You can also use the DBMS_SPACE.UNUSED_SPACE procedure to determine the HWM:

High water mark = total blocks - unused blocks - 1
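For example, a minimal sketch of calling it for the HR.JOBS table used below (the owner and table name are only examples); it prints the same figure as the formula above:

set serveroutput on
declare
  l_total_blocks    number;
  l_total_bytes     number;
  l_unused_blocks   number;
  l_unused_bytes    number;
  l_lue_file_id     number;
  l_lue_block_id    number;
  l_last_used_block number;
begin
  dbms_space.unused_space(
    segment_owner             => 'HR',
    segment_name              => 'JOBS',
    segment_type              => 'TABLE',
    total_blocks              => l_total_blocks,
    total_bytes               => l_total_bytes,
    unused_blocks             => l_unused_blocks,
    unused_bytes              => l_unused_bytes,
    last_used_extent_file_id  => l_lue_file_id,
    last_used_extent_block_id => l_lue_block_id,
    last_used_block           => l_last_used_block);
  -- high water mark = total blocks - unused blocks - 1
  dbms_output.put_line('HWM (blocks) = ' || (l_total_blocks - l_unused_blocks - 1));
end;
/

Alternatively, the same numbers can be estimated from the dictionary after analyzing the table: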

SQL> select blocks from dba_segments where owner=upper('HR') and segment_name = upper('JOBS');

SQL> analyze table hr.jobs estimate statistics;

SQL> select empty_blocks from dba_tables where owner=upper('HR') and table_name = upper('jobs');

Ways of resetting the HWM:
Option 1
ALTER TABLE tab_name MOVE [TABLESPACE tbs_name];
This option requires rebuilding the indexes, since they become unusable after the move. Users also cannot use the application or reports while the indexes are being rebuilt. Make sure you stop GoldenGate if this table is in an extract process; otherwise it will ABEND (because of the unusable indexes).

Option 2
1. Export the table data
2. Truncate the table
3. Import the data back
4. Analyze the table

Option 3
1. Copy the table data
2. Truncate the original table
3. Insert the data back

Option 4
Use the DBMS_REDEFINITION package to copy the table online (redefining the table structure); a minimal sketch follows below.
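A bare-bones online redefinition sketch (assuming an HR.JOBS table and a pre-created empty interim table HR.JOBS_INTERIM with the same structure; a real run would also copy dependents with DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS and handle errors):

begin
  -- verify the table can be redefined online using its primary key
  dbms_redefinition.can_redef_table('HR', 'JOBS', dbms_redefinition.cons_use_pk);
  -- start redefinition: data is copied into the interim table
  dbms_redefinition.start_redef_table('HR', 'JOBS', 'JOBS_INTERIM');
  -- swap the tables; JOBS now uses the freshly loaded segment (lower HWM)
  dbms_redefinition.finish_redef_table('HR', 'JOBS', 'JOBS_INTERIM');
end;
/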

People are often confused about resetting the HWM with the ALTER TABLE ... DEALLOCATE UNUSED clause. That clause only frees unused space above the high water mark; it cannot lower the HWM itself.

To determine the number of distinct blocks that actually contain data (i.e. the space used by the table below the high water mark), extract the block number from each ROWID:

SQL> select count (distinct dbms_rowid.rowid_block_number(rowid)) "used blocks" from table_name;

Starting with Oracle 10gR1 we can use a new feature for lowering the high water mark. It is called segment shrinking and is only possible for segments that use ASSM, in other words, segments located in tablespaces that use Automatic Segment Space Management.
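For example (the table name is only an example; row movement must be enabled first):

SQL> alter table hr.jobs enable row movement;
SQL> alter table hr.jobs shrink space compact;   -- optional first pass: compact rows only
SQL> alter table hr.jobs shrink space;           -- compacts (if not already done) and lowers the HWM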

How to View High Water Mark - Step-by-Step Instructions (Doc ID 262353.1)

