Wednesday, January 11, 2012

Maximum Datafile Size In Oracle Database


Each Oracle datafile can contain a maximum of 2^22 - 1, i.e., 4194303 (about 4 million) data blocks. So the maximum file size is 4194303 multiplied by the database block size.

A database can have a maximum of 65533 datafiles.
The db_block_size can be 2K, 4K, 8K, 16K, or 32K.
SQL> show  parameter  db_block_size;     (gives your data block size)

Block Size | Maximum Datafile Size
---------------------------------------------
2k         | 4194303 * 2k  = 8 GB
4k         | 4194303 * 4k  = 16 GB
8k         | 4194303 * 8k  = 32 GB
16k        | 4194303 * 16k = 64 GB
32k        | 4194303 * 32k = 128 GB
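You can also let the database do this arithmetic for you. A minimal sketch (assuming a user that can query V$PARAMETER):

SQL>  SELECT  value AS block_size_bytes,
              (POWER(2,22) - 1) * TO_NUMBER(value) / POWER(1024,3) AS max_smallfile_gb
      FROM  v$parameter
      WHERE  name = 'db_block_size';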

In Oracle Database 10g, the BIGFILE tablespace was introduced. A BIGFILE tablespace can have ONLY a single datafile, but this datafile can contain a maximum of 2^32 - 1, i.e., 4294967295 (about 4 billion) data blocks.

Block Size | Maximum Datafile Size
---------------------------------------------
2k         | 4294967295 * 2k  = 8 TB
4k         | 4294967295 * 4k  = 16 TB
8k         | 4294967295 * 8k  = 32 TB
16k        | 4294967295 * 16k = 64 TB
32k        | 4294967295 * 32k = 128 TB
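For example, a BIGFILE tablespace can be created as below (a sketch; the tablespace name and file path are illustrative):

SQL>  CREATE BIGFILE TABLESPACE big_ts
      DATAFILE '/u01/oradata/PROD/big_ts01.dbf' SIZE 10G
      AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;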

Maximum database size = maximum datafile size * maximum number of datafiles in the database.

So the maximum datafile size and the maximum database size depend on the data block size.
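For example, with an 8K block size a smallfile database tops out at roughly 65533 * 32 GB ≈ 2 PB, and with a 32K block size at roughly 65533 * 128 GB ≈ 8 PB.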



Tuesday, January 10, 2012

data pump TABLE_EXISTS_ACTION parameter


TABLE_EXISTS_ACTION = {SKIP | APPEND | TRUNCATE | REPLACE}
The default value is "SKIP"


NOTE: If  CONTENT=DATA_ONLY  is specified, then the default is "APPEND", not SKIP.

The TABLE_EXISTS_ACTION parameter applies only to the Data Pump Import operation. It is used when you import a table that already exists in the target schema. If you do not use this parameter and impdp finds that a table to be imported already exists, impdp skips that table.
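For example (a hedged sketch; the schema, table, dump file, and directory names are illustrative):

$  impdp  system/****  dumpfile=expdpschema.dmp  directory=data_pump_dir  tables=scott.emp  table_exists_action=append  logfile=imp_append.log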

Now you may be interested in the other three values:

APPEND - Loads rows from the source and leaves existing rows UNCHANGED. If the table has a Primary key or Unique key constraint, you need to ensure that the append operation does not violate it (that is, it does not introduce duplicate data).

TRUNCATE - If the table is not a parent table (i.e., no other table references it through a foreign key constraint), the existing data is truncated and the data is loaded from the dump file. Otherwise, the data will not be loaded.

REPLACE - This is the trickiest value of the TABLE_EXISTS_ACTION parameter. If the table being imported is a parent table (i.e., other tables reference it through foreign key constraints), those foreign key constraints are dropped from the child tables. All existing data is replaced with the imported data.




Oracle spool command


What is SPOOL ?
The SPOOL command in Oracle SQL*Plus writes query output to an operating-system file, meaning you can send all the SQL output to any file you wish.

How to SPOOL from Oracle in CSV format?

Log in to sqlplus

SET ECHO OFF
SET HEADING OFF
SET DEFINE OFF
SET FEEDBACK OFF
SET VERIFY OFF
SET SERVEROUTPUT ON
SET PAGESIZE 5000
SET LINESIZE 120

SQL>   spool  c:\file.csv     (Windows)

SQL>   SELECT COL1||','||COL2||','||COL3 FROM TABLE_NAME;

SQL>   spool off

SET DEFINE ON
SET FEEDBACK ON
SET HEADING ON
SET VERIFY ON


Example: Recently I wrote a spool script to set the max extents of all tables and indexes to UNLIMITED, because a lot of tables and indexes had a NULL max extents value.

SET ECHO OFF
SET HEADING OFF
SET DEFINE OFF
SET FEEDBACK OFF
SET VERIFY OFF
SET SERVEROUTPUT ON
SET PAGESIZE 5000
SET LINESIZE 120

SQL>   spool  extent.sql

SQL>   select  'alter '||object_type||' '||object_name||' storage (maxextents unlimited);'
       from  dba_objects  where  object_type in ('TABLE','INDEX')  and  owner = 'XXX';

SQL>   spool off

SQL>   @extent.sql                       (executes the generated script)

If you don't specify an extension after the file name (e.g., extent instead of extent.sql), Oracle by default generates the output file with a '.lst' extension (i.e., extent.lst).

If we have very few tables in the database, instead of writing a spool script we can alter them manually one after another using

SQL>   alter  table  tab_name  storage (maxextents unlimited);
 Table altered.
or
SQL>   alter  index  ind_name  storage (maxextents unlimited);
 Index altered.

Using a single command we can generate a dynamic SQL script that makes the same change for all the objects.

NOTE:
On Linux the output file is created in the directory from which you started SQLPLUS.
On Windows the output file is created at the location you specified in the spool command.

APPEND:

If you want the SQL output appended to an existing file, you can do the following:

Log in to sqlplus

SQL>  spool  /opt/oracle/File.log  append
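SQL*Plus (10g and later) also accepts CREATE and REPLACE keywords on the spool command (a quick sketch; the path is illustrative):

SQL>  spool  /opt/oracle/File.log  create       (fails if the file already exists)
SQL>  spool  /opt/oracle/File.log  replace      (overwrites an existing file; this is the default)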




Wednesday, January 4, 2012

Data pump Schema Refresh



Schema refresh is a regular job for any DBA, especially during migration projects, so today I decided to write a post about how we do a schema refresh using Data Pump.
Assume here that the schema (SCOTT) is being refreshed from the source (PROD) to the target (TEST) on an Oracle 11g server using the SYSTEM user (you can do this with any privileged user).

SQL>  select  banner  from  v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

On Source side (PROD) :
Create a directory or use an existing one (ex: data_pump_dir) and grant read and write permissions on this directory to user 'SYSTEM'  --> if you run as the SYS user this grant is not required

SQL >   grant  read, write  on  directory  data_pump_dir  to   system;
Grant Succeeded.


NOTE: Always make sure there is enough space to accommodate the dump files.
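A quick way to estimate the required space (a sketch; run as a DBA user on the source):

SQL>  SELECT  ROUND(SUM(bytes)/1024/1024/1024, 2) AS size_gb
      FROM  dba_segments
      WHERE  owner = 'SCOTT';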

Step 1:   Exporting the data from PROD (source)

$   vi   expdp_refresh_schema.sh      (put the following expdp command inside the script)

expdp  system/****@sourcehostname   dumpfile=expdpschema.dmp   directory=data_pump_dir   logfile=export.log   schemas=scott

$  nohup  sh  expdp_refresh_schema.sh > refresh_schema.out &

nohup is NOT mandatory, as the Data Pump process always runs on the server.
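While the export runs, you can check the job status from another session (a quick sketch):

SQL>  SELECT  job_name, operation, state  FROM  dba_datapump_jobs;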

Step 2 :  Copying the dumpfiles from source to target

For copying dump files from one server to another we can use WinSCP (a graphical tool for copying files between Windows and Linux and vice versa), FTP, SFTP, SCP, etc.

$ scp  expdpschema.dmp   oracle@TargetHostname:/home/oracle/datapump

Here I'm copying the dump file from the source to the /home/oracle/datapump location on the target.


Step 3 :  Importing data from dumpfile into target database

Before importing the dump file into the target (TEST), make sure you delete or back up all the objects in that schema. To clear all objects from a particular schema, run the script from here

$ impdp  system/****@targethostname   dumpfile=expdpschema.dmp   directory=data_pump_dir   logfile=import.log   remap_schema=scott:newscott


Step 4 :   Verify target database object counts with source db

SQL>   select  count(*)  from  dba_objects  where  owner = 'NEWSCOTT';
SQL>   select  count(*)  from  dba_tables   where  owner = 'NEWSCOTT';

The above results should be the same as those of the source 'SCOTT' schema.
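For a finer-grained check, compare per-type counts on both sides (run with owner = 'SCOTT' on the source and owner = 'NEWSCOTT' on the target):

SQL>  SELECT  object_type, COUNT(*)
      FROM  dba_objects
      WHERE  owner = 'NEWSCOTT'
      GROUP BY  object_type
      ORDER BY  object_type;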

Check out more on Data Pump:
To Kill a running Data pump Job :  http://chandu208.blogspot.com/2011/09/data-pump-scenarios.html
About data pump :  http://chandu208.blogspot.com/2011/04/oracle-data-pump.html


