Sunday, July 31, 2011

Changing the Name of a Database

If you ever want to change the name of a database, or change the settings of MAXDATAFILES, MAXLOGFILES or MAXLOGMEMBERS, you have to create a new control file.

Creating a New Control File

Follow the steps given below to create a new control file.
Steps:
1. First generate the CREATE CONTROLFILE statement:
SQL> alter database backup controlfile to trace;
After this statement, Oracle writes the CREATE CONTROLFILE statement to a trace file. The trace file is randomly named, something like ORA23212.TRC, and is created in the USER_DUMP_DEST directory.
2. Go to the USER_DUMP_DEST directory and open the latest trace file in a text editor. This file contains the CREATE CONTROLFILE statement. It has two sets of statements, one with RESETLOGS and another without. Since we are changing the name of the database, we have to use the RESETLOGS option of the CREATE CONTROLFILE statement. Copy the statement into a file, say c.sql.

3. Open the c.sql file in a text editor and change the database name from ica to prod, as shown in the example below:
CREATE CONTROLFILE
   SET DATABASE prod   
   LOGFILE GROUP 1 ('/u01/oracle/ica/redo01_01.log',
                    '/u01/oracle/ica/redo01_02.log'),
           GROUP 2 ('/u01/oracle/ica/redo02_01.log',
                    '/u01/oracle/ica/redo02_02.log'),
           GROUP 3 ('/u01/oracle/ica/redo03_01.log',
                    '/u01/oracle/ica/redo03_02.log')
   RESETLOGS
   DATAFILE '/u01/oracle/ica/system01.dbf' SIZE 3M,
            '/u01/oracle/ica/rbs01.dbs' SIZE 5M,
            '/u01/oracle/ica/users01.dbs' SIZE 5M,
            '/u01/oracle/ica/temp01.dbs' SIZE 5M
   MAXLOGFILES 50
   MAXLOGMEMBERS 3
   MAXLOGHISTORY 400
   MAXDATAFILES 200
   MAXINSTANCES 6
   ARCHIVELOG;

4. Start the instance without mounting the database:
SQL>STARTUP NOMOUNT;

5. Execute the c.sql script:
SQL> @/u01/oracle/c.sql

6. Open the database with the RESETLOGS option:
SQL>ALTER DATABASE OPEN RESETLOGS;
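One step worth calling out from the sequence above: before STARTUP NOMOUNT (step 4), the DB_NAME initialization parameter must also be set to the new name, or the CREATE CONTROLFILE statement will fail with a name mismatch. A minimal sketch, assuming the instance uses an spfile (with a pfile, simply edit db_name in init.ora before restarting):

```sql
-- Run while the instance is still up, then shut down and STARTUP NOMOUNT.
SQL> ALTER SYSTEM SET DB_NAME=prod SCOPE=SPFILE;
```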

Sunday, July 24, 2011

Data guard GAP Detection and Resolution



Overview:

An archive gap is a range of missing archived redo logs, created whenever the standby system is unable to receive the next archived redo log generated by the primary database.

For example, an archive gap occurs when the network becomes unavailable and automatic archiving from the primary database to the standby database stops. When the network is available again, automatic transmission of the redo data from the primary database to the failed standby database resumes.

Common causes of archive gaps are:

  • Network Disconnects or stop of Log Transport Services
  • Outages of the Standby Database
  • Misconfigurations of Log Transport Services
  • I/O-Issues on the Standby Site
  • Manual Deletion of ArchiveLogs before they are applied to the Standby
  • Insufficient Bandwidth in the Network between the Primary and Standby Site


Methods of Gap Resolution:

Data Guard provides two methods for gap resolution, Automatic and FAL (Fetch Archive Log). The automatic method requires no configuration, while FAL requires configuration via init.ora parameters.

Automatic Gap Resolution:

In both 9.0.1 and 9.2.0 Automatic Gap Resolution is implemented during log transport processing. As the LGWR or ARCH process begins to send redo over to the standby, the sequence number of the log being archived is compared to the last sequence received by the RFS process on the standby. If the RFS process detects that the archive log being received is greater than the last sequence received plus one, then the RFS will piggyback a request to the primary to send the missing archive logs. Since the standby destination requesting the gap resolution is already defined by the LOG_ARCHIVE_DEST_n parameter on the primary, the ARCH process on the primary sends the logs to the standby and notifies the LGWR that the gaps have been resolved.

            Starting in 9.2.0, automatic gap resolution has been enhanced. In addition to the above, the ARCH process on the primary database polls all standby databases every minute to see if there is a gap in the sequence of archived redo logs. If a gap is detected then the ARCH process sends the missing archived redo log files to the standby databases that reported the gap. Once the gap is resolved, the LGWR process is notified that the site is up to date.
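A quick way to verify that gaps are being resolved is to compare the highest sequence archived on the primary with the highest sequence applied on the standby. A minimal sketch using standard V$ARCHIVED_LOG columns (the DEST_ID value is an assumption and depends on your log_archive_dest_n setup):

```sql
-- On the primary: highest sequence archived to the local destination
-- (DEST_ID=1 is an assumption; adjust to your configuration).
SQL> SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1;

-- On the standby: highest sequence already applied.
SQL> SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND APPLIED='YES';
```

If the two numbers stay far apart after several log switches, a gap may not be resolving automatically.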

FAL Gap Resolution:

As the RFS process on the standby receives an archived log, it updates the standby controlfile with the name and location of the file. Once the MRP (Media Recovery Process) sees the update to the controlfile, it attempts to recover that file. If the MRP process finds that the archived log is missing or corrupt, FAL is called to resolve the gap or obtain a new copy. Since MRP has no direct communications link with the primary, it must use the FAL_SERVER and FAL_CLIENT initialization parameters to resolve the gap. Both of these parameters must be set in the standby init.ora. The two parameters are defined as:

FAL_SERVER:  An OracleNet service name that exists in the standby tnsnames.ora file and points to the primary database listener. The FAL_SERVER parameter can contain a comma-delimited list of locations that should be attempted during gap resolution.

FAL_CLIENT:   An OracleNet service name that exists in the primary tnsnames.ora file and points to the standby database listener. The value of FAL_CLIENT should also be listed as the service in a remote archive destination pointing to the standby.
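As a concrete illustration, the standby side of this configuration might look like the following sketch. The service names prim and stby are hypothetical and must resolve through the respective tnsnames.ora files:

```
# Standby init.ora (hypothetical service names)
fal_server=prim
fal_client=stby

# The standby tnsnames.ora must resolve PRIM to the primary listener;
# the primary tnsnames.ora must resolve STBY to the standby listener,
# and the primary needs a remote destination using that service, e.g.:
# log_archive_dest_2='SERVICE=stby'
```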

Once MRP needs to resolve a gap it uses the value from FAL_SERVER to call the primary database. Once communication with the primary has been established, MRP passes the FAL_CLIENT value to the primary ARCH process. The primary ARCH process locates the remote archive destination with the corresponding service name and ships the missing archived redo logs. If the first destination listed in FAL_SERVER is unable to resolve the gap, then the next destination is attempted until either the gap is resolved or all FAL_SERVER destinations have been tried.

As of 9.2.0, FAL gap resolution only works with physical standby databases, as the process is tied to MRP. Gap recovery on a logical standby database is handled through the heartbeat mechanism.

Simulating Gap Recovery

The following steps can be used to illustrate and verify both automatic and FAL gap recovery. As the steps involve shutting down the standby database, which can impact disaster recovery, it is recommended to perform these procedures in a test environment.

Automatic Gap Resolution:

1. Shutdown the physical standby database.
2. Determine the current sequence on the primary database.
3. Perform at least three log switches on the primary database.
4. Verify that the logs did not get transferred to the standby archive dest.
5. Start the standby database.
6. Perform a log switch on the primary and verify that the gap gets resolved on the standby.
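The steps above can be sketched in SQL*Plus; the commands are standard, though the exact output will vary:

```sql
-- Step 2, on the primary: note the current log sequence.
SQL> SELECT THREAD#, SEQUENCE# FROM V$LOG WHERE STATUS='CURRENT';

-- Step 3, on the primary: force three log switches.
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ALTER SYSTEM SWITCH LOGFILE;
SQL> ALTER SYSTEM SWITCH LOGFILE;

-- Step 6, on the standby after restart: confirm the gap was resolved.
SQL> SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE APPLIED='YES';
```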

FAL Gap Resolution:

1. In the standby init.ora define the fal_server and fal_client parameters.
2. Bounce the standby database so that the parameters are put into effect.
3. Perform three log switches on the primary database.
4. In the standby_archive_dest directory delete the middle archive log on the standby.
5. Start managed recovery and verify that the gap is resolved by FAL_SERVER and FAL_CLIENT.
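A sketch of the FAL test, assuming the standby uses an spfile (the service names prim and stby are hypothetical):

```sql
-- Steps 1-2, on the standby: set FAL parameters and bounce.
SQL> ALTER SYSTEM SET FAL_SERVER=prim SCOPE=SPFILE;
SQL> ALTER SYSTEM SET FAL_CLIENT=stby SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP NOMOUNT;
SQL> ALTER DATABASE MOUNT STANDBY DATABASE;

-- Step 5: after deleting the middle archived log at the OS level,
-- start managed recovery; FAL should re-fetch the missing log.
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```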

Manually Resolving a Gap:

In some rare cases it might be necessary to manually resolve gaps. The following section describes how to query the appropriate views to determine if a gap exists.

On your physical standby database:

Query the V$ARCHIVE_GAP view:

SQL> SELECT * FROM V$ARCHIVE_GAP;
   THREAD#  LOW_SEQUENCE#  HIGH_SEQUENCE#
----------  -------------  --------------
         1            443             446

The query results show that your physical standby database is currently missing logs from sequence 443 to sequence 446 for thread 1. After you identify the gap, issue the following SQL statement on the primary database to locate the archived redo logs on your primary database:

SQL> SELECT NAME FROM V$ARCHIVED_LOG WHERE THREAD#=1 AND DEST_ID=1  AND  SEQUENCE# BETWEEN 443 AND 446;

NAME
-------------------------------------------------------
/u01/oradata/arch/arch_1_443.arc
/u01/oradata/arch/arch_1_444.arc
/u01/oradata/arch/arch_1_445.arc

Copy the logs returned by the query to your physical standby database and register them using the ALTER DATABASE REGISTER LOGFILE command.

SQL> ALTER DATABASE REGISTER LOGFILE    '/u01/oradata/stby/arch/arch_1_443.arc';
SQL> ALTER DATABASE REGISTER LOGFILE    '/u01/oradata/stby/arch/arch_1_444.arc';
SQL> ALTER DATABASE REGISTER LOGFILE    '/u01/oradata/stby/arch/arch_1_445.arc';

Once the log files have been registered in the standby controlfile, you can restart the MRP process.
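With the logs registered, managed recovery can be restarted with the standard command:

```sql
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```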

On a logical standby database:
Query the DBA_LOGSTDBY_LOG view.

SQL> SELECT THREAD#, SEQUENCE#, FILE_NAME FROM DBA_LOGSTDBY_LOG L    WHERE NEXT_CHANGE# NOT IN  (SELECT FIRST_CHANGE# FROM DBA_LOGSTDBY_LOG WHERE L.THREAD# =THREAD#)  ORDER BY THREAD#,SEQUENCE#;

   THREAD#  SEQUENCE#  FILE_NAME
----------  ---------  ---------------------------------------------
         1        451  /u01/oradata/logical_stby/arch/arch_1_451.arc
         1        453  /u01/oradata/logical_stby/arch/arch_1_453.arc

Copy the missing logs to the logical standby system and register them using the ALTER DATABASE REGISTER LOGICAL LOGFILE statement on your logical standby database.

SQL> ALTER DATABASE REGISTER LOGICAL LOGFILE '/u01/oradata/logical_stby/arch/arch_1_452.arc';

After you register these logs on the logical standby database, you can restart log apply services.


Ref:
https://sysdbagurus.wordpress.com/2017/04/03/data-guard-gap-detection-and-resolution/
Data Guard Gap Detection and Resolution Possibilities (Doc ID 1537316.1)


Apply CRS Patch to CRS_HOME only or RDBMS_HOME only


Goal

The Clusterware patches for 10g and 11g contain binaries that can be installed in the Clusterware home (CRS_HOME) and the database home (RDBMS_HOME). Due to normal upgrading of the Oracle software in a RAC environment, customers may want to maintain multiple Oracle homes of different versions. In some situations, customers may need to apply a CRS patch to either CRS_HOME or RDBMS_HOME only.

The README.txt that comes with a CRS patch generally assumes customers apply the patch to both CRS_HOME and RDBMS_HOME at the same time. This note gives instructions on what to do if customers need to apply a CRS patch to only CRS_HOME or only RDBMS_HOME.

Customers may need to do this when:

1). Customers have a mixed CRS and RDBMS version.
2). Customers need to apply a CRS patch to a single ASM instance environment, or a single instance Exadata environment. For example, patch ocssd.bin on a single ASM instance environment; patch diskmon.bin on a single instance Exadata environment.

Solution

1. Apply a CRS patch to CRS_HOME but not to RDBMS_HOME.

Under this scenario, since CRS patch is not applied to RDBMS_HOME, one can skip the steps in README.txt related to RDBMS_HOME. An outline of the steps follows.
1). Verify that the Oracle Inventory is properly configured.
2). Unzip the PSE container file
3). Shutdown the RDBMS and ASM instances, listeners and nodeapps on all nodes before shutting down the CRS daemons.
4). Invoke the custom/scripts/prerootpatch.sh as root to unlock protected files.
5). Invoke custom/scripts/prepatch.sh -crshome <CRS_HOME>
     Note: Do not run custom/server/<patch#>/custom/scripts/prepatch.sh -dbhome <RDBMS_HOME>
6). Patch the CRS_HOME files
     opatch napply -local -oh <CRS_HOME> -id <patch#>
     Note: Do not run opatch against RDBMS_HOME.
7). Configure CRS_HOME
     custom/scripts/postpatch.sh -crshome <CRS_HOME>
     Note: Do not run custom/server/<patch#>/custom/scripts/postpatch.sh -dbhome <RDBMS_HOME>
8). Invoke custom/scripts/postrootpatch.sh -crshome <CRS_HOME> as root
9). Determine whether the patch has been installed on CRS_HOME:
     opatch lsinventory -detail -oh <CRS_HOME>
     Note: Since the patch is not installed on RDBMS_HOME, opatch lsinventory -oh <RDBMS_HOME> will not show the patch#.

2. Apply CRS patch on RDBMS_HOME but not CRS_HOME.

Under this scenario, since CRS patch is not applied to CRS_HOME, but it is applied to RDBMS_HOME, one can skip the steps in README.txt related to CRS_HOME.
1). Verify that the Oracle Inventory is properly configured.
2). Unzip the PSE container file
3). Shutdown the RAC/ASM instances, listeners and nodeapps that run from RDBMS_HOME. Since we are not patching CRS_HOME, the CRS daemons do not need to be down.
4). Skip:  custom/scripts/prerootpatch.sh -crshome <CRS_HOME> -crsuser <username>
5). Skip: custom/scripts/prepatch.sh -crshome <CRS_HOME>
     Run: custom/server/<patch#>/custom/scripts/prepatch.sh -dbhome <RDBMS_HOME>
6). Patch the Files
     Skip 6.1 Patch the CRS home files:
                 opatch napply -local -oh <CRS_HOME> -id <patch#>
     Run: 6.2 Patch the RDBMS home files:
                 opatch napply custom/server/ -local -oh <RDBMS_HOME> -id <patch#>
7). Configure the HOME
     Skip: 7.1 Configure the CRS HOME
             custom/scripts/postpatch.sh -crshome <CRS_HOME>
     Run: 7.2 Configure the RDBMS HOME
             custom/server/<patch#>/custom/scripts/postpatch.sh -dbhome <RDBMS_HOME>
8). Skip: custom/scripts/postrootpatch.sh -crshome <CRS_HOME>
9). Verify patch is installed on RDBMS_HOME:
      opatch lsinventory -detail -oh <RDBMS_HOME>
10). Start nodeapps, listeners, ASM/RAC instances that were shutdown before the patching

create job using new 10g scheduling feature


Oracle 10g introduced a comprehensive scheduler (DBMS_SCHEDULER) to extend the functionality provided by the DBMS_JOB package. Jobs can either be made up of predefined parts (programs and schedules) or be completely self-contained, depending on which overload of the CREATE_JOB procedure is used to create them. In order to create a job, schedule or program, the owner requires the CREATE JOB privilege.
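Granting the privilege is straightforward (the user john here is hypothetical):

```sql
SQL> GRANT CREATE JOB TO john;
```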

Programs:
-----------------
The scheduler allows you to optionally create programs, which hold metadata about a task but no schedule information. A program may relate to a PL/SQL block, a stored procedure or an OS executable file. Programs are created using the CREATE_PROGRAM procedure:

-- Create the test program.

BEGIN
-- PL/SQL Block.
DBMS_SCHEDULER.create_program (
program_name => 'test_plsql_block_program',
program_type => 'PLSQL_BLOCK',
program_action => 'BEGIN DBMS_STATS.gather_schema_stats(''JOHN''); END;',
enabled => TRUE,
comments => 'Program to gather JOHN''s statistics using a PL/SQL block.');

DBMS_SCHEDULER.enable (name => 'test_plsql_block_program');
END;
/

-- Display the program details.

SELECT program_name,program_type,enabled FROM user_scheduler_programs;

PROGRAM_NAME                   PROGRAM_TYPE     ENABL
------------------------------ ---------------- -----
TEST_PLSQL_BLOCK_PROGRAM       PLSQL_BLOCK      TRUE

Programs can be deleted using the DROP_PROGRAM procedure:

BEGIN
DBMS_SCHEDULER.drop_program (program_name => 'test_plsql_block_program');
END;
/

Schedules:
------------------

The scheduler allows you to optionally create Schedules. Schedules define the start time, end time and interval related to a job. Schedules are created using the CREATE_SCHEDULE procedure:

-- Create the schedule

BEGIN
DBMS_SCHEDULER.create_schedule (
schedule_name => 'test_schedule',
start_date => SYSTIMESTAMP,
repeat_interval => 'freq=hourly; byminute=0',
end_date => NULL,
comments => 'Repeats hourly, on the hour, for ever.');
END;
/

-- Display the schedule details.

SELECT schedule_name,start_date FROM user_scheduler_schedules;

SCHEDULE_NAME    START_DATE
---------------- -----------------------------------
TEST_SCHEDULE    27-APR-04 11.32.33.604343 AM -05:00


A schedule can be dropped using the DROP_SCHEDULE procedure:

BEGIN
DBMS_SCHEDULER.drop_schedule (schedule_name => 'TEST_SCHEDULE');
END;
/

PL/SQL procedure successfully completed.


Jobs:
-------
Jobs are the main part of the scheduler. They can either be made up of predefined parts (programs and schedules) or completely self contained depending on which overload of the CREATE_JOB procedure is used to create them:

-- Create jobs.

-- Job defined entirely by the CREATE JOB procedure.

BEGIN
DBMS_SCHEDULER.create_job (
job_name => 'test_self_contained_job',
job_type => 'PLSQL_BLOCK',
job_action => 'BEGIN DBMS_STATS.gather_schema_stats(''JOHN''); END;',
start_date => SYSTIMESTAMP,
repeat_interval => 'freq=hourly; byminute=0',
end_date => NULL,
enabled => TRUE,
comments => 'Job created using the CREATE JOB procedure.');
End;
/

-- Job defined by an existing program and schedule.

BEGIN
DBMS_SCHEDULER.create_job (
job_name => 'test_program_schedule_job',
program_name => 'test_plsql_block_program',
schedule_name => 'TEST_SCHEDULE',
enabled => TRUE,
comments => 'Job created using an existing program and schedule.');
End;
/

-- Display job details.

SELECT job_name, enabled FROM user_scheduler_jobs;

JOB_NAME                       ENABL
------------------------------ -----
TEST_PROGRAM_SCHEDULE_JOB      TRUE
TEST_SELF_CONTAINED_JOB        TRUE

Jobs are normally run under the control of the job coordinator, but they can be controlled manually using the RUN_JOB and STOP_JOB procedures:

BEGIN
DBMS_SCHEDULER.run_job (job_name => 'TEST_PROGRAM_SCHEDULE_JOB',
use_current_session => FALSE);
END;
/

BEGIN
DBMS_SCHEDULER.stop_job (job_name => 'TEST_PROGRAM_SCHEDULE_JOB');
END;
/

Jobs can be deleted using the DROP_JOB procedure:

BEGIN
DBMS_SCHEDULER.drop_job (job_name => 'TEST_PROGRAM_SCHEDULE_JOB');
DBMS_SCHEDULER.drop_job (job_name => 'test_self_contained_job');
END;
/


Ref: oracle-base.com

Wednesday, July 13, 2011

CURSOR_SHARING parameter



CURSOR_SHARING is an init.ora parameter which decides whether a SQL statement sent by a user is a candidate for a fresh parse or will use an existing plan.

This parameter has 3 values.

1. CURSOR_SHARING = Exact (Default)

Definition: Share the plan only if the text of the SQL matches exactly the text of a SQL already in the shared pool.

Let’s take an example


SQL> create table test1 (t1 number);
Table created.

SQL> insert into test1 values(1);
1 row created.

SQL> insert into test1 values(2);
1 row created.

SQL> commit;
Commit complete.

SQL> select * from test1 where t1=1;

T1
----------
1

SQL> select * from test1 where t1=2;

T1
----------
2

SQL> select sql_text from v$sql  where sql_text like 'select * from test1%'  order by sql_text;

SQL_TEXT
-----------------------------------------------------
select * from test1 where t1=1
select * from test1 where t1=2


As you can see, there were 2 statements in V$SQL, so Oracle generated 2 plans. Oracle had to do the same work again to generate the plan even though the only difference between the two SQLs was the literal value.


2. CURSOR_SHARING = Force (Introduced in 8.1.6)

Definition: Share the plan (forcibly) of a SQL if its text matches the text of a SQL in the shared pool, ignoring literal values.

This means that if two SQLs differ only in their literal values, the plan is shared.

Let’s take an example:  
I’m using the same table and data used in the example above.

SQL> alter system flush shared_pool;
System altered.

SQL> alter session set cursor_sharing=force;
Session altered.

SQL> select * from test1 where t1=1;

T1
----------
1

SQL> select * from test1 where t1=2;

T1
----------
2

SQL> select sql_text from v$sql where sql_text like 'select * from test1%'   order by sql_text;

SQL_TEXT
---------------------------------------------------
select * from test1 where t1=:"SYS_B_0"


You can see that for both statements there was only one entry in V$SQL. This means that for the second occurrence, Oracle did not generate a new plan.

This not only saves DB server engine time in generating plans but also reduces the number of plans the shared pool has to hold.

Important note:

CURSOR_SHARING=FORCE can have a flip side as well, so you must use it carefully. With it, we force Oracle to use the same plan for two (or more) SQLs even when sharing that plan may not be good for all of them.

Example: “where t1=2” may be a good candidate for an index scan while “where t1=10” should use a full table scan, because (by assumption) 90% of the rows in the table have t1=10.



3. CURSOR_SHARING = SIMILAR (Introduced in 9i)


This is the tricky one, but also the most used.

Definition: SIMILAR causes statements that may differ in some literals, but are otherwise identical, to share a cursor, unless the literals affect either the meaning of the statement or the degree to which the plan is optimized. (Source: Oracle documentation)

Let’s understand this.
Re-quoting the example above: “where t1=2” may be a good candidate for an index scan while “where t1=10” should use a full table scan, because (by assumption) 90% of the rows in the table have t1=10.

To avoid two statements using the same plan when that plan is not good for one of them, we have CURSOR_SHARING=SIMILAR.

Let’s take an example:

SQL> alter system flush shared_pool;
System altered.

SQL> drop table test1;
Table dropped.

SQL> create table test1 (t1 number,t2 number);
Table created.


SQL> begin
  2    for i in 1 .. 100 loop
  3      insert into test1 values(1,i);
  4    end loop;
  5    commit;
  6    update test1 set t1=2 where rownum < 2;
  7  end;
  8  /

PL/SQL procedure successfully completed.

In this case t1 has the value “2” in the first row and “1” in the remaining 99 rows.
SQL> create index tt_indx on test1(t1);
Index created.

SQL> alter session set cursor_sharing=similar;
Session altered.

SQL> select * from test1 where t1=2;
1 row selected.

SQL> select * from test1 where t1=1;
99 rows selected.

SQL> select sql_text from v$sql  where sql_text like 'select * from test1%' order by sql_text;

SQL_TEXT
----------------------------------------------------
select * from test1 where t1=:"SYS_B_0"
select * from test1 where t1=:"SYS_B_0"

This tells us that even though the two statements were similar, Oracle opted for a different plan. Now even if you use t1=30 (0 rows), Oracle will create another plan.

SQL> select * from test1 where t1=30; -- (0 rows)

SQL> select sql_text from v$sql where sql_text like 'select * from test1%' order by sql_text;

SQL_TEXT
---------------------------------------------------
select * from test1 where t1=:"SYS_B_0"
select * from test1 where t1=:"SYS_B_0"
select * from test1 where t1=:"SYS_B_0"

This is because the first time the SQL ran, the Oracle engine found the literal value “unsafe”, because using the same plan for a different literal value could be a bad choice for similar SQLs. So along with the plan, the optimizer stored the literal value as well.
This ensures the plan is reused only when the same literal is provided. For any other value, the optimizer generates a new plan.

But this doesn’t mean that SIMILAR and EXACT are same.

SQL> alter system flush shared_pool;
System altered.

SQL> select * from test1 where t1=2 and t1=22;
no rows selected

SQL> select * from test1 where t1=2 and t1=23;
no rows selected

SQL> select sql_text from v$sql where sql_text like 'select * from test1%' order by sql_text;

SQL_TEXT
--------------------------------------------------------------
select * from test1 where t1=:"SYS_B_0" and t1=:"SYS_B_1"

The optimizer used a single plan for both.

Conclusions:

1. Use CURSOR_SHARING=SIMILAR only when you have library cache misses and/or most of the SQL statements differ only in literal values.
2. CURSOR_SHARING=FORCE/SIMILAR significantly reduces the number of plans in the shared pool.
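The parameter can be changed dynamically at either the session or the system level using standard syntax:

```sql
SQL> ALTER SESSION SET CURSOR_SHARING=SIMILAR;
SQL> ALTER SYSTEM SET CURSOR_SHARING=FORCE SCOPE=BOTH;
SQL> SHOW PARAMETER cursor_sharing
```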

Note:

1. Oracle does not recommend setting CURSOR_SHARING to FORCE in a DSS environment or if you are using complex queries. Also, star transformation is not supported with CURSOR_SHARING set to either SIMILAR or FORCE.

2. Setting CURSOR_SHARING to SIMILAR or FORCE causes an increase in the maximum lengths (as returned by DESCRIBE) of any selected expressions that contain literals (in a SELECT statement). However, the actual length of the data returned does not change.
