Tuesday, April 19, 2011

Manually Creating an ASM Instance

Here is a nice article I found on ASM instance creation; I'm reposting it here.

Contents
  1. Overview
  2. Configuring Oracle Cluster Synchronization Services (CSS)
  3. Creating the ASM Instance
  4. Identify RAW Devices
  5. Starting the ASM Instance
  6. Verify RAW / Logical Disks Are Discovered
  7. Creating Disk Groups
  8. Using Disk Groups

Overview
Automatic Storage Management (ASM) is a new feature in Oracle10g that alleviates the DBA from having to manually manage and tune disks used by Oracle databases. ASM provides the DBA with a file system and volume manager that makes use of an Oracle instance (referred to as an ASM instance) and can be managed using either SQL or Oracle Enterprise Manager.
Only one ASM instance is required per node. The same ASM instance can manage ASM storage for all 10g databases running on the node.
When the DBA installs the Oracle10g software and creates a new database, creating an ASM instance is a snap. The DBCA provides a simple check box and an easy wizard to create an ASM instance, as well as an Oracle database that makes use of the new ASM instance for storage. But what happens when the DBA is migrating to Oracle10g, or didn't opt to use ASM when a 10g database was first created? In that case, the DBA needs to know how to manually create an ASM instance, and that is what this article provides.

Configuring Oracle Cluster Synchronization Services (CSS)
Automatic Storage Management (ASM) requires the use of Oracle Cluster Synchronization Services (CSS), and as such, CSS must be configured and running before attempting to use ASM. The CSS service is required to enable synchronization between an ASM instance and the database instances that rely on it for database file storage.
In a non-RAC environment, the Oracle Universal Installer will configure and start a single-node version of the CSS service. For Oracle Real Application Clusters (RAC) installations, the CSS service is installed with Oracle Cluster Ready Services (CRS) in a separate Oracle home directory (also called the CRS home directory). For single-node installations, the CSS service is installed in, and runs from, the same Oracle home as the Oracle database.

Because CSS must be running before any ASM instance or database instance starts, Oracle Universal Installer configures it to start automatically when the system starts. For Linux / UNIX platforms, the Oracle Universal Installer writes the CSS configuration tasks to root.sh, which is run by the DBA after the installation process.

With Oracle10g R1, CSS was always configured, regardless of whether you chose to use ASM. On the Linux / UNIX platform, CSS was installed and configured via the root.sh script. This caused considerable confusion: many DBAs did not know what the process was, and most of them did not want it running since they were not using ASM.
Oracle listened to these concerns (and strongly worded complaints), and in Oracle10g R2 it configures this process only when absolutely necessary. In Oracle10g R2, for example, if you do not choose to configure a stand-alone ASM instance, or a database that uses ASM storage, Oracle will not automatically configure CSS in the root.sh script.
In the case where the CSS process is not configured to run on the node (see above), you can use the $ORACLE_HOME/bin/localconfig script on Linux / UNIX, or the %ORACLE_HOME%\bin\localconfig.bat batch file on Windows. For example, on Linux, run the following command as root to configure CSS after the fact, outside of the root.sh script:
$ su
# $ORACLE_HOME/bin/localconfig all


Creating the ASM Instance
The following steps can be used to create a fully functional ASM instance named +ASM. The node used in this example also has a regular 10g database running named TESTDB. All of these steps should be carried out by the oracle UNIX user account:
  • Create Admin Directories
We start by creating the admin directories under ORACLE_BASE. The admin directories for the existing database on this node (TESTDB) are located at $ORACLE_BASE/admin/TESTDB. The new +ASM admin directories will be created alongside the TESTDB database:
mkdir -p $ORACLE_BASE/admin/+ASM/bdump
mkdir -p $ORACLE_BASE/admin/+ASM/cdump
mkdir -p $ORACLE_BASE/admin/+ASM/hdump
mkdir -p $ORACLE_BASE/admin/+ASM/pfile
mkdir -p $ORACLE_BASE/admin/+ASM/udump      
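Equivalently, the five directories can be created in a single command with shell brace expansion (bash or ksh93). ORACLE_BASE is assumed to be set in your environment; the scratch-directory fallback below exists only so the snippet can run standalone:

```shell
# Same result as the five mkdir commands above, via brace expansion.
# ORACLE_BASE is assumed to be set; mktemp fallback is for illustration only.
ORACLE_BASE=${ORACLE_BASE:-$(mktemp -d)}
mkdir -p "$ORACLE_BASE"/admin/+ASM/{bdump,cdump,hdump,pfile,udump}
ls "$ORACLE_BASE"/admin/+ASM
```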
  • Create Instance Parameter File
In this step, we will manually create an instance parameter file for the ASM instance. This is actually an easy task, as most of the parameters used for a normal instance do not apply to an ASM instance. Note that you should be fine accepting the default sizes for the database buffer cache, shared pool, and most of the other SGA memory structures. The only exception is the large pool; I like to set this value manually to at least 12MB. In most cases, the SGA memory footprint is less than 100MB. Let's start by creating the file init.ora and placing it in $ORACLE_BASE/admin/+ASM/pfile. The initial parameters to use for the file are:

$ORACLE_BASE/admin/+ASM/pfile/init.ora
###########################################
# Automatic Storage Management
###########################################
# _asm_allow_only_raw_disks=false
# asm_diskgroups='TESTDB_DATA1'

# Default asm_diskstring values for supported platforms:
#     Solaris (32/64 bit)   /dev/rdsk/*
#     Windows NT/XP         \\.\orcldisk*
#     Linux (32/64 bit)     /dev/raw/*
#     HPUX                  /dev/rdsk/*
#     HPUX(Tru 64)          /dev/rdisk/*
#     AIX                   /dev/rhdisk/*
# asm_diskstring=''

###########################################
# Diagnostics and Statistics
###########################################
background_dump_dest=/u01/app/oracle/admin/+ASM/bdump
core_dump_dest=/u01/app/oracle/admin/+ASM/cdump
user_dump_dest=/u01/app/oracle/admin/+ASM/udump

###########################################
# Miscellaneous
###########################################
instance_type=asm
compatible=10.1.0.4.0

###########################################
# Pools
###########################################
large_pool_size=12M

###########################################
# Security and Auditing
###########################################
remote_login_passwordfile=exclusive



After creating the $ORACLE_BASE/admin/+ASM/pfile/init.ora file, UNIX users should create the following symbolic link:
$ ln -s $ORACLE_BASE/admin/+ASM/pfile/init.ora $ORACLE_HOME/dbs/init+ASM.ora
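If you script your environment builds, the parameter file and the symbolic link can be produced in one pass. The sketch below writes a trimmed version of the parameter file shown above (only the uncommented parameters) and links it into $ORACLE_HOME/dbs. ORACLE_BASE and ORACLE_HOME are assumed to be set; the scratch-directory fallbacks are for standalone illustration only:

```shell
# Sketch: generate the ASM pfile from the listing above and create the
# init+ASM.ora symbolic link.  mktemp fallbacks are for illustration only.
ORACLE_BASE=${ORACLE_BASE:-$(mktemp -d)}
ORACLE_HOME=${ORACLE_HOME:-$(mktemp -d)}
mkdir -p "$ORACLE_BASE"/admin/+ASM/pfile "$ORACLE_HOME"/dbs

cat > "$ORACLE_BASE"/admin/+ASM/pfile/init.ora <<EOF
background_dump_dest=$ORACLE_BASE/admin/+ASM/bdump
core_dump_dest=$ORACLE_BASE/admin/+ASM/cdump
user_dump_dest=$ORACLE_BASE/admin/+ASM/udump
instance_type=asm
compatible=10.1.0.4.0
large_pool_size=12M
remote_login_passwordfile=exclusive
EOF

ln -sf "$ORACLE_BASE"/admin/+ASM/pfile/init.ora "$ORACLE_HOME"/dbs/init+ASM.ora
```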

Identify RAW Devices
Before starting the ASM instance, we should identify the RAW device(s) (UNIX) or logical drives (Windows) that will be used as ASM disks. For the purpose of this article, I have four RAW devices set up on Linux:
# ls -l /dev/raw/raw[1234]
crw-rw----  1 oracle dba 162, 1 Jun  2 22:04 /dev/raw/raw1
crw-rw----  1 oracle dba 162, 2 Jun  2 22:04 /dev/raw/raw2
crw-rw----  1 oracle dba 162, 3 Jun  2 22:04 /dev/raw/raw3
crw-rw----  1 oracle dba 162, 4 Jun  2 22:04 /dev/raw/raw4


 Attention Linux Users!
This article does not use Oracle's ASMLib I/O libraries. If you plan on using Oracle's ASMLib, you will need to install and configure ASMLib, as well as mark all disks using:
/etc/init.d/oracleasm createdisk <ASM_VOLUME_NAME> <LINUX_DEV_DEVICE>

Starting the ASM Instance
Once the instance parameter file is in place, it is time to start the ASM instance. It is important to note that an ASM instance never mounts an actual database; it is responsible for mounting and managing disk groups.
# su - oracle
$ ORACLE_SID=+ASM; export ORACLE_SID
$ sqlplus "/ as sysdba"
 
SQL> startup
ASM instance started
 
Total System Global Area   75497472 bytes
Fixed Size                   777852 bytes
Variable Size              74719620 bytes
Database Buffers                  0 bytes
Redo Buffers                      0 bytes
ORA-15110: no diskgroups mounted
 
SQL> create spfile from pfile='/u01/app/oracle/admin/+ASM/pfile/init.ora';
 
SQL> shutdown
ASM instance shutdown
 
SQL> startup
ASM instance started
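Before moving on, a quick sanity check confirms that the session really is attached to the ASM instance rather than a database instance. V$INSTANCE and the instance_type parameter show it:

```sql
SQL> select instance_name, status from v$instance;
SQL> show parameter instance_type
```

The instance name should come back as +ASM and instance_type as asm.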

Verify RAW / Logical Disks Are Discovered
Available (candidate) disks for ASM are discovered by use of the asm_diskstring instance parameter. This parameter contains the path(s) that Oracle will use to discover (or see) these candidate disks. In most cases, you shouldn't have to set this value as the default value is set for the supported platform. The following table is a list of default values for asm_diskstring on supported platforms when the value of the instance parameter is set to NULL (the value is not set):

Operating System      Default Search String
--------------------  ---------------------
Solaris (32/64 bit)   /dev/rdsk/*
Windows NT/XP         \\.\orcldisk*
Linux (32/64 bit)     /dev/raw/*
HP-UX                 /dev/rdsk/*
HP-UX (Tru64)         /dev/rdisk/*
AIX                   /dev/rhdisk/*
Recall the four RAW devices identified earlier: /dev/raw/raw1 through /dev/raw/raw4.
I now need to determine if Oracle can find these four disks. The view V$ASM_DISK can be queried from the ASM instance to determine which disks are being used or may potentially be used as ASM disks. Note that you must log into the ASM instance with SYSDBA privileges. Here is the query that I ran from the ASM instance:
$ ORACLE_SID=+ASM; export ORACLE_SID
$ sqlplus "/ as sysdba"

SQL> SELECT group_number, disk_number, mount_status, header_status, state, path FROM v$asm_disk;

GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE    PATH
------------ ----------- ------- ------------ -------- ---------------
           0           0 CLOSED  CANDIDATE    NORMAL   /dev/raw/raw1
           0           1 CLOSED  CANDIDATE    NORMAL   /dev/raw/raw2
           0           2 CLOSED  CANDIDATE    NORMAL   /dev/raw/raw3
           0           3 CLOSED  CANDIDATE    NORMAL   /dev/raw/raw4
Note the value of zero in the GROUP_NUMBER column for all four disks. This indicates that a disk is available but hasn't yet been assigned to a disk group. The next section details the steps for creating a disk group.

Creating Disk Groups
In this section, I will create a new disk group named TESTDB_DATA1 and assign all four discovered disks to it. The disk group will be configured for NORMAL REDUNDANCY, which results in two-way mirroring of all files within the disk group. Within the disk group, I will configure two failure groups, which define two independent sets of disks that should never contain more than one copy of mirrored data (mirrored extents).
For the purpose of this article, it is assumed that /dev/raw/raw1 and /dev/raw/raw2 are on one controller, while /dev/raw/raw3 and /dev/raw/raw4 are on another. I want the ASM disk configuration such that any data written to /dev/raw/raw1 and /dev/raw/raw2 is mirrored to /dev/raw/raw3 and /dev/raw/raw4. I also want ASM to guarantee that data on /dev/raw/raw1 is never mirrored to /dev/raw/raw2, and that data on /dev/raw/raw3 is never mirrored to /dev/raw/raw4. With this type of configuration, I can lose an entire controller and still have access to all of my data. When configuring failure groups, you should put all disks that share a controller (or any resource, for that matter) into their own failure group. If that resource fails, you still have access to the data, as ASM guarantees that no mirrored data exists within the same failure group.
The new disk group should be created from the ASM instance using the following SQL:
SQL> CREATE DISKGROUP testdb_data1 NORMAL REDUNDANCY
  2  FAILGROUP controller1 DISK '/dev/raw/raw1', '/dev/raw/raw2'
  3  FAILGROUP controller2 DISK '/dev/raw/raw3', '/dev/raw/raw4';

Diskgroup created.

Now, let's take a look at the new disk group and disk details:
SQL> select group_number, name, total_mb, free_mb, state, type from v$asm_diskgroup;

GROUP_NUMBER NAME             TOTAL_MB    FREE_MB STATE       TYPE
------------ -------------- ---------- ---------- ----------- ------
           1 TESTDB_DATA1          388        282 MOUNTED     NORMAL

SQL> select group_number, disk_number, mount_status, header_status, state, path, failgroup  from v$asm_disk;

GROUP_NUMBER DISK_NUMBER MOUNT_S HEADER_STATU STATE    PATH            FAILGROUP
------------ ----------- ------- ------------ -------- --------------- ------------
           1           0 CACHED  MEMBER       NORMAL   /dev/raw/raw1   CONTROLLER1
           1           1 CACHED  MEMBER       NORMAL   /dev/raw/raw2   CONTROLLER1
           1           2 CACHED  MEMBER       NORMAL   /dev/raw/raw3   CONTROLLER2
           1           3 CACHED  MEMBER       NORMAL   /dev/raw/raw4   CONTROLLER2
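Although not needed for this article, note that a disk group can be grown online after creation. The example below is only a sketch: it assumes a hypothetical fifth device, /dev/raw/raw5, attached to the first controller. ASM automatically rebalances data across the disks after such a change, and progress can be watched from V$ASM_OPERATION:

```sql
SQL> ALTER DISKGROUP testdb_data1
  2  ADD FAILGROUP controller1 DISK '/dev/raw/raw5';

SQL> SELECT group_number, operation, state, est_minutes
  2  FROM   v$asm_operation;
```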




Using Disk Groups
Finally, let's start making use of the new disk group! Disk groups can be used in place of actual file names when creating database files, redo log members, control files, etc.
Let's now log in to the database instance that will be making use of the new ASM instance. For this article, I have a database instance named TESTDB already created and running on the node. The database was created using the local file system for all database files, redo log members, and control files:
$ ORACLE_SID=TESTDB; export ORACLE_SID
$ sqlplus "/ as sysdba"

SQL> @dba_files_all

Tablespace Name
File Class           Filename                                                        File Size
-------------------- ---------------------------------------------------------- --------------
SYSAUX               /u05/oradata/TESTDB/datafile/o1_mf_sysaux_19cv6mwk_.dbf       241,172,480
SYSTEM               /u05/oradata/TESTDB/datafile/o1_mf_system_19cv5rmv_.dbf       471,859,200
TEMP                 /u05/oradata/TESTDB/datafile/o1_mf_temp_19cv6sy9_.tmp          24,117,248
UNDOTBS1             /u05/oradata/TESTDB/datafile/o1_mf_undotbs1_19cv6c37_.dbf     214,958,080
USERS                /u05/oradata/TESTDB/datafile/o1_mf_users_19cv72yw_.dbf          5,242,880
[ CONTROL FILE    ]  /u03/oradata/TESTDB/controlfile/o1_mf_19cv5m84_.ctl
[ CONTROL FILE    ]  /u04/oradata/TESTDB/controlfile/o1_mf_19cv5msk_.ctl
[ CONTROL FILE    ]  /u05/oradata/TESTDB/controlfile/o1_mf_19cv5n34_.ctl
[ ONLINE REDO LOG ]  /u03/oradata/TESTDB/onlinelog/o1_mf_1_19cv5n8d_.log            10,485,760
[ ONLINE REDO LOG ]  /u03/oradata/TESTDB/onlinelog/o1_mf_2_19cv5o6l_.log            10,485,760
[ ONLINE REDO LOG ]  /u03/oradata/TESTDB/onlinelog/o1_mf_3_19cv5pdy_.log            10,485,760
[ ONLINE REDO LOG ]  /u04/oradata/TESTDB/onlinelog/o1_mf_1_19cv5nbr_.log            10,485,760
[ ONLINE REDO LOG ]  /u04/oradata/TESTDB/onlinelog/o1_mf_2_19cv5oml_.log            10,485,760
[ ONLINE REDO LOG ]  /u04/oradata/TESTDB/onlinelog/o1_mf_3_19cv5pt4_.log            10,485,760
[ ONLINE REDO LOG ]  /u05/oradata/TESTDB/onlinelog/o1_mf_1_19cv5nsf_.log            10,485,760
[ ONLINE REDO LOG ]  /u05/oradata/TESTDB/onlinelog/o1_mf_2_19cv5p1b_.log            10,485,760
[ ONLINE REDO LOG ]  /u05/oradata/TESTDB/onlinelog/o1_mf_3_19cv5q8j_.log            10,485,760
                                                                                --------------
sum                                                                              1,051,721,728

Let's now create a new tablespace that makes use of the new disk group:
SQL> create tablespace users2 datafile '+TESTDB_DATA1' size 100m;

Tablespace created.

And that's it! The CREATE TABLESPACE command (above) uses a datafile named +TESTDB_DATA1. Note that the plus sign (+) in front of the name TESTDB_DATA1 tells Oracle that this name is a disk group name, not an operating system file name. In this example, the TESTDB instance asks the ASM instance for a new file in that disk group and uses that file for the tablespace data. Let's take a look at that new file name:
SQL> @dba_files_all

Tablespace Name
File Class           Filename                                                        File Size
-------------------- ---------------------------------------------------------- --------------
SYSAUX               /u05/oradata/TESTDB/datafile/o1_mf_sysaux_19cv6mwk_.dbf       241,172,480
SYSTEM               /u05/oradata/TESTDB/datafile/o1_mf_system_19cv5rmv_.dbf       471,859,200
TEMP                 /u05/oradata/TESTDB/datafile/o1_mf_temp_19cv6sy9_.tmp          24,117,248
UNDOTBS1             /u05/oradata/TESTDB/datafile/o1_mf_undotbs1_19cv6c37_.dbf     214,958,080
USERS                /u05/oradata/TESTDB/datafile/o1_mf_users_19cv72yw_.dbf          5,242,880
USERS2               +TESTDB_DATA1/testdb/datafile/users2.256.560031579            104,857,600
[ CONTROL FILE    ]  /u03/oradata/TESTDB/controlfile/o1_mf_19cv5m84_.ctl
[ CONTROL FILE    ]  /u04/oradata/TESTDB/controlfile/o1_mf_19cv5msk_.ctl
[ CONTROL FILE    ]  /u05/oradata/TESTDB/controlfile/o1_mf_19cv5n34_.ctl
[ ONLINE REDO LOG ]  /u03/oradata/TESTDB/onlinelog/o1_mf_1_19cv5n8d_.log            10,485,760
[ ONLINE REDO LOG ]  /u03/oradata/TESTDB/onlinelog/o1_mf_2_19cv5o6l_.log            10,485,760
[ ONLINE REDO LOG ]  /u03/oradata/TESTDB/onlinelog/o1_mf_3_19cv5pdy_.log            10,485,760
[ ONLINE REDO LOG ]  /u04/oradata/TESTDB/onlinelog/o1_mf_1_19cv5nbr_.log            10,485,760
[ ONLINE REDO LOG ]  /u04/oradata/TESTDB/onlinelog/o1_mf_2_19cv5oml_.log            10,485,760
[ ONLINE REDO LOG ]  /u04/oradata/TESTDB/onlinelog/o1_mf_3_19cv5pt4_.log            10,485,760
[ ONLINE REDO LOG ]  /u05/oradata/TESTDB/onlinelog/o1_mf_1_19cv5nsf_.log            10,485,760
[ ONLINE REDO LOG ]  /u05/oradata/TESTDB/onlinelog/o1_mf_2_19cv5p1b_.log            10,485,760
[ ONLINE REDO LOG ]  /u05/oradata/TESTDB/onlinelog/o1_mf_3_19cv5q8j_.log            10,485,760
                                                                                --------------
sum                                                                              1,156,579,328
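One last convenience worth mentioning: if most of a database's new files will live in ASM, you can avoid typing the disk group name in every statement by pointing the Oracle Managed Files parameter db_create_file_dest at it. The sketch below uses a hypothetical tablespace name (users2 already exists, so users3 is used here); with the parameter set, CREATE TABLESPACE needs no DATAFILE clause at all:

```sql
SQL> alter system set db_create_file_dest='+TESTDB_DATA1';

SQL> create tablespace users3;
```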

ref:  http://www.idevelopment.info/
