Thursday, July 26, 2012

Exporting (expdp) & Importing (impdp) Specified Objects Only


EXPORT/IMPORT only schema tables:

expdp system/******  dumpfile=xxxxx.dmp directory=EXPORT_DIR  schemas=DEMO  include=TABLE

Generating the DDL for the above tables:

impdp system/*****  dumpfile=xxxxx.dmp directory=EXPORT_DIR schemas=DEMO sqlfile=table_DDL.sql 

If we need the DDL for only one table out of all the tables, then:

impdp system/*****  dumpfile=xxxxx.dmp directory=EXPORT_DIR schemas=DEMO sqlfile=tables_1.sql   include=TABLE:\"IN \(\'CUSTOMER\'\)\"

The above command creates a file tables_1.sql which contains only the DEMO.CUSTOMER table DDL.
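
Escaping the quotes on the command line is error-prone; the same filter can instead go into a parameter file, where no shell escaping is needed. A minimal sketch (the parfile name one_table.par is just an example):

$ vi one_table.par

directory=EXPORT_DIR
dumpfile=xxxxx.dmp
schemas=DEMO
sqlfile=tables_1.sql
INCLUDE=TABLE:"IN ('CUSTOMER')"

$ impdp system/****** parfile=one_table.par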


NOTE:  The SQLFILE parameter works only for import (impdp) and requires an existing dumpfile.


Export and Import Only CONSTRAINTS:

expdp system/******  dumpfile=xxxxx.dmp directory=EXPORT_DIR  schemas=DEMO  include=CONSTRAINT

Here I am exporting only the DEMO schema constraints.

If we want these DEMO schema constraint DDLs in a file, then run the command below:

impdp system/****** dumpfile=XXXXX.dmp directory=EXPORT_DIR  schemas=DEMO   SQLFILE=constraint_DDL.sql 

Then a file constraint_DDL.sql will be created containing all the DEMO schema constraints.


You can find how to create DDL for an INDEX in the "Creating TABLE & INDEX DDL" example of the SQLFILE post below.


Monday, July 16, 2012

Oracle TKPROF


TKPROF (Transient Kernel Profiler) is an Oracle database utility that converts Oracle trace files into a more human-readable form.

Recently a developer asked me to send him the output of a particular trace file as a flat file, so I performed the steps below:

SQL> show parameter diag;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
diagnostic_dest                      string      /opt/oracle/app

Then go to the trace directory under that diagnostic destination and run the following:

$ tkprof  DEV71_ora_24961_INFO_PKG.trc   dba.txt   sys=no

Here I am writing the formatted trace output to the file dba.txt, excluding recursive SYS statements.
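
For completeness, here is one way such a trace file can be produced in the first place. A minimal sketch, assuming you already know the SID and SERIAL# of the session to be traced (123 and 456 are placeholders):

SQL> exec dbms_monitor.session_trace_enable(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);
(let the workload run, then)
SQL> exec dbms_monitor.session_trace_disable(session_id => 123, serial_num => 456);

Or, to trace your own session:

SQL> alter session set tracefile_identifier = 'INFO_PKG';
SQL> alter session set sql_trace = true;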

TKPROF parameters:

TRACEFILE: The name of the trace file containing the statistics gathered by SQL_TRACE.
OUTPUT_FILE: The name of the file where TKPROF writes its output.
SORT=parameters
The order in which to display the statements in the output. There are about 20 different options for sorting the output; you can even combine these options.
PRINT=number
The number of statements to include in the output. If not included, TKPROF will list all statements in the output.
EXPLAIN=username/password@sid
Runs EXPLAIN PLAN on the user's SQL statements in the trace file. This option creates a PLAN_TABLE of its own, so the user needs privileges to create the table and space in which to create it. When TKPROF is finished, this table is dropped.
INSERT=filename
This option creates a script that builds a table and stores the trace file statistics for each SQL statement traced.
RECORD=filename
This option produces a file of all the user's SQL statements.
SYS=YES/NO
This option allows the user to request that recursive SQL statements not be displayed in the output. The default is YES.
SORT is especially useful when we want to see the top SQL statements that consumed the most resources, such as CPU usage or disk usage.

The following are the data elements available for sorting:

· prscnt – The number of times the SQL was parsed.
· prscpu – The CPU time spent parsing.
· prsela – The elapsed time spent parsing the SQL.
· prsdsk – The number of physical reads required for the parse.
· prsmis – The number of library cache misses during parse.
· prscu – The number of current block reads required for the parse.
· execnt – The number of times the SQL statement was executed.
· execpu – The CPU time spent executing the SQL.
· exeela – The elapsed time spent executing the SQL.
· exedsk – The number of physical reads during execution.
· exeqry – The number of consistent block reads during execution.
· execu – The number of current block reads during execution.
· exerow – The number of rows processed during execution.
· exemis – The number of library cache misses during execution.
· fchcnt – The number of fetches performed.
· fchcpu – The CPU time spent fetching rows.
· fchela – The elapsed time spent fetching rows.
· fchdsk – The number of physical disk reads during the fetch.
· fchqry – The number of consistent block reads during the fetch.
· fchcu – The number of current block reads during the fetch.
· fchrow – The number of rows fetched for the query.
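
For example, to list only the top 10 statements ranked by execute and fetch elapsed time (a sketch reusing the trace file from above):

$ tkprof DEV71_ora_24961_INFO_PKG.trc top10.txt sort=exeela,fchela print=10 sys=no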

NOTE:

The TKPROF utility puts traced output into a readable format. Without running TKPROF, it would be difficult to read the output of a trace. By specifying "explain=username/password" (noted earlier), we are able to get the EXPLAIN PLAN execution path in addition to the execution statistics of the query.


ORA-01127, ORA-01163 during database creation


In my earlier post I mentioned how to create/add another database on an existing server (http://www.chandu208.blogspot.com/2012/07/add-another-database-on-existing-server.html). Today I was again creating a new instance on an existing server and hit an error:

ERROR: ORA-01127: database name 'RCDEV' exceeds size limit of 8 characters
CAUSE: The database name mentioned above exceeds the limit set.
SOLUTION: Change the database name to 8 characters or fewer.

Then I changed the name to 8 characters and the error was resolved.
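
For reference, the name comes from the db_name parameter in the pfile; a minimal sketch of the relevant line (the file name and value are examples, and the value must be 8 characters or fewer):

# initRCDEV.ora
db_name='RCDEV'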

Now, running the control file script to create the control files for the new database:
SQL> @con.ctl
CREATE CONTROLFILE REUSE DATABASE "RCDEV" NORESETLOGS ARCHIVELOG
*
ERROR at line 1:
ORA-01503: CREATE CONTROLFILE failed
ORA-01163: SIZE clause indicates 12800 (blocks), but should match header 640
ORA-01110: data file 4: '/OPT/ORACLE/ORADATA/RCDEV/USERS01.DBF'

$ vi con.ctl

CREATE CONTROLFILE SET DATABASE "RCDEV" RESETLOGS  NOARCHIVELOG
    MAXLOGFILES 16
    MAXLOGMEMBERS 3
    MAXDATAFILES 100
    MAXINSTANCES 8
    MAXLOGHISTORY 292
LOGFILE
  GROUP 1 '/opt/oracle/oradata/RCDEV/redo01.log'  SIZE 1024M BLOCKSIZE 512,
  GROUP 2 '/opt/oracle/oradata/RCDEV/redo02.log'  SIZE 1024M BLOCKSIZE 512,
  GROUP 3 '/opt/oracle/oradata/RCDEV/redo03.log'  SIZE 1024M BLOCKSIZE 512,
  GROUP 4 '/opt/oracle/app/fast_recovery_area/RCDEV/onlinelog/o1_mf_4_7zscnkom_.log'  SIZE 100M BLOCKSIZE 512
-- STANDBY LOGFILE
DATAFILE
  '/opt/oracle/oradata/RCDEV/system01.dbf',
  '/opt/oracle/oradata/RCDEV/sysaux01.dbf',
  '/opt/oracle/oradata/RCDEV/undotbs01.dbf',
  '/opt/oracle/oradata/RCDEV/users01.dbf',                 <-- trailing comma leads to an error
CHARACTER SET WE8MSWIN1252
;


I deleted that trailing comma and reran the con.ctl script, and the control files were created.

Friday, July 13, 2012

Catproc is invalid after database creation


Recently I created a new database instance on an existing Linux Oracle 11gR2 server (see here: http://chandu208.blogspot.com/2012/07/add-another-database-on-existing-server.html),
and after creating it I ran the Oracle scripts below:

@$ORACLE_HOME/rdbms/admin/catalog.sql
@$ORACLE_HOME/rdbms/admin/catproc.sql

Then I bounced the database and queried for invalid objects:

$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Fri Jul 13 14:21:34 2012
Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select count(*) from dba_objects where status='INVALID';

  COUNT(*)
----------
        45
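
To see which component those invalid objects belong to, it helps to break them down by owner and type first (a quick sketch using standard dictionary views):

SQL> select owner, object_type, count(*) from dba_objects where status='INVALID' group by owner, object_type order by owner, object_type;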

SQL> SELECT comp_id, version, status FROM dba_registry;

COMP_ID                        VERSION                        STATUS
------------------------------ ------------------------------ -----------
OWB                            11.2.0.2.0                     VALID
APEX                           3.2.1.00.12                    VALID
EM                             11.2.0.2.0                     VALID
AMD                            11.2.0.2.0                     VALID
SDO                            11.2.0.2.0                     VALID
ORDIM                          11.2.0.2.0                     VALID
XDB                            11.2.0.2.0                     VALID
CONTEXT                        11.2.0.2.0                     VALID
EXF                            11.2.0.2.0                     VALID
RUL                            11.2.0.2.0                     VALID
OWM                            11.2.0.2.0                     VALID

COMP_ID                        VERSION                        STATUS
------------------------------ ------------------------------ -----------
CATALOG                        11.2.0.2.0                     VALID
CATPROC                        11.2.0.2.0                     INVALID
JAVAVM                         11.2.0.2.0                     VALID
XML                            11.2.0.2.0                     VALID
CATJAVA                        11.2.0.2.0                     VALID
APS                            11.2.0.2.0                     VALID
XOQ                            11.2.0.2.0                     VALID

18 rows selected.

Above we can see that CATPROC is INVALID, so I ran the utlrp.sql script to recompile the invalid objects:
@$ORACLE_HOME/rdbms/admin/utlrp.sql
It was still the same, so I did some research and found the solution:

SQL> exec DBMS_REGISTRY_SYS.VALIDATE_CATPROC;
PL/SQL procedure successfully completed.

SQL> select count(*) from dba_objects where status='INVALID';

  COUNT(*)
----------
         0

SQL> SELECT comp_id, version, status FROM dba_registry;

COMP_ID                        VERSION                        STATUS
------------------------------ ------------------------------ -----------
OWB                            11.2.0.2.0                     VALID
APEX                           3.2.1.00.12                    VALID
EM                             11.2.0.2.0                     VALID
AMD                            11.2.0.2.0                     VALID
SDO                            11.2.0.2.0                     VALID
ORDIM                          11.2.0.2.0                     VALID
XDB                            11.2.0.2.0                     VALID
CONTEXT                        11.2.0.2.0                     VALID
EXF                            11.2.0.2.0                     VALID
RUL                            11.2.0.2.0                     VALID
OWM                            11.2.0.2.0                     VALID

COMP_ID                        VERSION                        STATUS
------------------------------ ------------------------------ -----------
CATALOG                        11.2.0.2.0                     VALID
CATPROC                        11.2.0.2.0                     VALID
JAVAVM                         11.2.0.2.0                     VALID
XML                            11.2.0.2.0                     VALID
CATJAVA                        11.2.0.2.0                     VALID
APS                            11.2.0.2.0                     VALID
XOQ                            11.2.0.2.0                     VALID

18 rows selected.

Add another database on an existing server


Log in to the existing database and create a pfile using the existing spfile or pfile, then edit this pfile with the new instance name.
Then create a password file using the ORAPWD utility:
$ orapwd file=/opt/oracle/app/11.2.0.2/dbs/orapwRCDEV  password=oracle
The password file resides in the $ORACLE_HOME/dbs location.
The important thing is to back up the control file to trace; a trace file will then be created in the trace file location:

SQL> alter database backup controlfile to trace;
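
On 10g and later you can also write the trace straight to a named file, which saves digging through the trace directory (the path here is just an example):

SQL> alter database backup controlfile to trace as '/tmp/control.ctl';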

I copied the trace file to a new location, edited it as shown below, and saved it as the file control.ctl.

CREATE CONTROLFILE REUSE DATABASE "OLDDB" NORESETLOGS
NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 240
MAXINSTANCES 1
MAXLOGHISTORY 113
LOGFILE
GROUP 1 ('/u03/oradata/oldlsq/log1a.dbf',
'/u03/oradata/olslsq/log1b.dbf') SIZE 30M,
GROUP 2 ('/u04/oradata/oldlsq/log2a.dbf',
'/u04/oradata/oldlsq/log2b.dbf') SIZE 30M
DATAFILE
'/u01/oradata/oldlsq/system01.dbf',
'/u01/oradata/oldlsq/mydatabase.dbf'
;

Edit it as follows:

CREATE CONTROLFILE SET DATABASE "NEWDB" RESETLOGS
NOARCHIVELOG
MAXLOGFILES 16
MAXLOGMEMBERS 2
MAXDATAFILES 240
MAXINSTANCES 1
MAXLOGHISTORY 113
LOGFILE
GROUP 1 ('/u03/oradata/newdb/log1a.dbf',
'/u03/oradata/newdb/log1b.dbf') SIZE 30M,
GROUP 2 ('/u04/oradata/newdb/log2a.dbf',
'/u04/oradata/newdb/log2b.dbf') SIZE 30M
DATAFILE
'/u01/oradata/newdb/system01.dbf',
'/u01/oradata/newdb/mydatabase.dbf'
;


Replace the old SID with the new SID.

Create all directories specified in the pfile.

Shut down the old database and copy the datafiles to the new SID's datafile location. Here I don't have ASM; if ASM is configured, check the location there and copy accordingly.

NOTE: check that the datafiles for all tablespaces have been copied.

$ export ORACLE_SID=newinstance
Check with: $ echo $ORACLE_SID

$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.2.0 Production on Fri Jul 13 14:21:34 2012
Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL > 


SQL> startup nomount pfile='/../../initsid.ora'
SQL> @control.ctl                    (this script was created from the control file trace)
SQL> alter database open resetlogs;
SQL> shut immediate
SQL> startup pfile='/../../initsid.ora'

So your database is created. Now just run the catalog.sql and catproc.sql scripts; you will find them in $ORACLE_HOME/rdbms/admin:


@$ORACLE_HOME/rdbms/admin/catalog.sql
@$ORACLE_HOME/rdbms/admin/catproc.sql

Then the Oracle dictionary views and packages will be created.

***** You must BOUNCE the database.

After the instance has been created, verify

$ sqlplus / as sysdba

SQL*Plus: Release 11.2.0.2.0 Production on Fri Jul 13 14:21:34 2012
Copyright (c) 1982, 2010, Oracle.  All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL > select dbid, name, instance_name, host_name, created, log_mode, open_mode from v$database, v$instance;

Add listener and TNS entries:
Go to $ORACLE_HOME/network/admin and edit listener.ora and tnsnames.ora (sample entries sketched below).
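
A minimal sketch of the two entries, assuming the new SID is NEWDB; the host name, port, and listener name are placeholders, and the port must differ from any listener already running on it:

# tnsnames.ora
NEWDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dev301)(PORT = 1522))
    (CONNECT_DATA = (SERVICE_NAME = NEWDB))
  )

# listener.ora
NEW_LISTENER =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dev301)(PORT = 1522))
  )

SID_LIST_NEW_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (ORACLE_HOME = /opt/oracle/app/11.2.0.2)
      (SID_NAME = NEWDB)
    )
  )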

Start the new listener using:
$ lsnrctl start new_listener

Check using $ ps -fu oracle | grep tns      -->   to see all running listeners

Friday, July 6, 2012

How to Extract Schema and Tablespace DDLs using the Data Pump SQLFILE parameter


NOTE:  To generate DDL statements you must have a DUMPFILE available.

Here I have a full database dump:

SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production


Extracting schema DDLs using SQLFILE:


$ vi schema_DDL.par

directory=EXPORT_DIR
dumpfile=DEV31_metadata.dmp
logfile=schemas_meta.log
SQLFILE=schemas_meta.sql
SCHEMAS=Schema1,Schema2,Schema3
CONTENT=METADATA_ONLY

oracle@dev301:/opt/DEV31$  impdp system/******  parfile=schema_DDL.par

Import: Release 11.2.0.2.0 - Production on Tue Jul 3 20:37:15 2012

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

Connected to: Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
Master table "SYSTEM"."SYS_SQL_FILE_SCHEMA_01" successfully loaded/unloaded
Starting "SYSTEM"."SYS_SQL_FILE_SCHEMA_01":  system/******** parfile=c_schemas.par
Processing object type DATABASE_EXPORT/SCHEMA/USER
Processing object type DATABASE_EXPORT/SCHEMA/GRANT/SYSTEM_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/ROLE_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/DEFAULT_ROLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLESPACE_QUOTA
Processing object type DATABASE_EXPORT/SCHEMA/DB_LINK
Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/SEQUENCE
Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/SEQUENCE/AUDIT_OBJ
Processing object type DATABASE_EXPORT/SCHEMA/PROCACT_SCHEMA
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TABLE
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/AUDIT_OBJ
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/PACKAGE_SPEC
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/FUNCTION
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/PROCEDURE
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/AUDIT_OBJ
Processing object type DATABASE_EXPORT/SCHEMA/FUNCTION/ALTER_FUNCTION
Processing object type DATABASE_EXPORT/SCHEMA/PROCEDURE/ALTER_PROCEDURE
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/VIEW
Processing object type DATABASE_EXPORT/SCHEMA/VIEW/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type DATABASE_EXPORT/SCHEMA/PACKAGE_BODIES/PACKAGE/PACKAGE_BODY
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/FUNCTIONAL_AND_BITMAP/INDEX
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/INDEX/STATISTICS/FUNCTIONAL_AND_BITMAP/INDEX_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/STATISTICS/TABLE_STATISTICS
Processing object type DATABASE_EXPORT/SCHEMA/TABLE/TRIGGER
Processing object type DATABASE_EXPORT/SCHEMA/JOB
Processing object type DATABASE_EXPORT/SCHEMA/POST_SCHEMA/PROCOBJ
Job "SYSTEM"."SYS_SQL_FILE_SCHEMA_01" successfully completed at 20:38:02


Then the schemas_meta.sql file is created.

Extracting Database Tablespace DDL using SQLFILE

Parfile:  $ vi Tablespace.par

dumpfile=DEV31_metadata.dmp
directory=EXPORT_DIR
logfile=tablespace.log
SQLFILE=TABLESPACE.sql
INCLUDE=TABLESPACE
FULL=Y
CONTENT=METADATA_ONLY


oracle@dev301:/opt/DEV31$  impdp system/******  parfile=Tablespace.par

Then TABLESPACE.sql will be created.


Creating TABLE & INDEX DDL:

impdp username/passwd directory=dir_name dumpfile=dumpfile_name.dmp sqlfile=demo_ddl.sql include=TABLE,INDEX
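
Once demo_ddl.sql exists, individual statements can be pulled out with ordinary text tools; for example (a sketch):

$ grep -En "CREATE (UNIQUE )?INDEX" demo_ddl.sql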


Tuesday, July 3, 2012

ORA-19815: WARNING: db_recovery_file_dest_size of 4227858432 bytes is 100.00% used


In any Oracle environment this is one of the most common errors you see when you don't plan to delete the archive logs periodically for databases running in ARCHIVELOG mode. It is very dangerous when it happens in a PRODUCTION database: the entire database can hang, which we don't want.

I faced this error on one of my non-prod environments where archive log deletion had not been properly taken care of. Because of this, all my SQL views were hanging on my Linux standalone database; if I queried any view, the output displayed nothing. I checked the alert log and found the error below:

Mon Jul 02 19:06:06 2012
Errors in file /opt/oracle/app/diag/rdbms/dev7/DEV7/trace/DEV7_arc3_17880.trc:
ORA-19815: WARNING: db_recovery_file_dest_size of 4227858432 bytes is 100.00% used, and has 0 remaining bytes available.
************************************************************************
You have following choices to free up space from recovery area:
1. Consider changing RMAN RETENTION POLICY. If you are using Data Guard,
   then consider changing RMAN ARCHIVELOG DELETION POLICY.
2. Back up files to tertiary device such as tape using RMAN
   BACKUP RECOVERY AREA command.
3. Add disk space and increase db_recovery_file_dest_size parameter to
   reflect the new space.
4. Delete unnecessary files using RMAN DELETE command. If an operating
   system command was used to delete files, then use RMAN CROSSCHECK and
   DELETE EXPIRED commands.
************************************************************************
ARC3: Error 19809 Creating archive log file to '/opt/oracle/app/fast_recovery_area/DEV7/archivelog/2012_07_02/o1_mf_1_26_%u_.arc'


SQL> select * from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
PL/SQL Release 11.2.0.2.0 - Production
CORE    11.2.0.2.0      Production
TNS for Linux: Version 11.2.0.2.0 - Production
NLSRTL Version 11.2.0.2.0 - Production

SQL> show parameter db_recovery_file_dest ;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------------------------
db_recovery_file_dest                string      /opt/oracle/app/fast_recovery_area
db_recovery_file_dest_size           big integer 4G

Go to RMAN and run a crosscheck to verify whether any archive logs have expired:

RMAN> crosscheck archivelog all;
The output displayed 3 expired archive logs.

RMAN> delete expired archivelog all;
This will delete the expired archive logs.

(OR)

If there are NO expired archive logs, then back up the archive logs from the db_recovery_file_dest location to some other location (such as tape) and then delete them.
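
RMAN can do this backup-then-delete in a single step; a sketch that writes the archive log backup to another disk location before removing the originals (the format path is just an example):

RMAN> backup archivelog all format '/backup/arch_%U' delete input;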

If you want to delete archive logs older than 'x' days, then:


RMAN> delete archivelog until time 'SYSDATE-7';       (deletes logs older than 7 days)

NOTE: While deleting archive logs, remember to delete them only if they are no longer needed (for example by an upstream process or a standby) and have already been backed up.
Now my SQL queries are working fine.

Since my db_recovery_file_dest_size was small, I increased it to a higher value to avoid this problem again in the future:

SQL>  ALTER SYSTEM SET DB_RECOVERY_FILE_DEST_SIZE = 50G scope=both;

SQL> show parameter db_recovery_file_dest ;

NAME                                 TYPE        VALUE
------------------------------------ ----------- ----------------------------------
db_recovery_file_dest                string      /opt/oracle/app/fast_recovery_area
db_recovery_file_dest_size           big integer 50G

The maximum value for db_recovery_file_dest_size is 17179869182G (in Oracle 10.2, 11.1, and 11.2).

Check the space used and the max limit using:

SQL> select name, space_limit/1024/1024/1024 MaxGB, space_used/1024/1024/1024 UsedGB, (space_used/space_limit)*100 "USED %", number_of_files from v$recovery_file_dest;

NAME         MAXGB    USEDGB       USED %    NUMBER_OF_FILES
----------   ------   ----------   -------   ---------------
dev7         50       5.04552555   9.90      28
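
A related view breaks usage down by file type, which shows whether archive logs, backups, or flashback logs are consuming the space:

SQL> select file_type, percent_space_used, percent_space_reclaimable, number_of_files from v$flash_recovery_area_usage;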

Since this is my non-prod environment we are not too worried about archive logs, but in a prod environment always have an archive log backup job, either from OEM or a shell script, running periodically. That said, we also need a scheduled cleanup of the same logs, again via OEM or cron jobs.

