Installing Oracle Database 11.2.0.3 on an Active/Passive Linux Cluster
Prerequisites:
1. OS installation complete, with the cluster ready.
2. RPMs required for Oracle database installation on Linux are installed.
Preparing OS users
Create the oracle and grid users and assign them the groups listed below:

OS Group Name | OS Username
------------- | ------------
oinstall      | grid, oracle
asmdba        | grid, oracle
asmadmin      | grid
asmoper       | grid
dba           | oracle
oper          | oracle
Note: The primary group for both oracle and grid is oinstall.
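The groups and users in the table above can be created with a short script. A sketch (the commands are echoed here so they can be reviewed before piping to sh as root; GIDs/UIDs are left to the system defaults, which is an assumption):

```shell
# Generate the group/user creation commands from the table above.
# Review the output, then pipe it to sh as root to execute.
for g in oinstall asmdba asmadmin asmoper dba oper; do
  echo "groupadd $g"
done
# Primary group oinstall for both users; secondary groups per the table.
echo "useradd -g oinstall -G asmdba,asmadmin,asmoper grid"
echo "useradd -g oinstall -G asmdba,dba,oper oracle"
```

Adjust group names or add explicit GIDs if your site standards require them.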
Create the Oracle base directories and change their owner and group (the paths match the ORACLE_HOME settings used in the profiles below):

mkdir -p /Grid/Grid11gdb
mkdir -p /oracle/Ora11gdb
chown -R grid:oinstall /Grid
chown -R oracle:oinstall /oracle

Note: Create the directories and change the permissions on the other node also.
Update the .bash_profile as per the environment for both users, on both nodes:
[grid@MMISAPP02 ~]$ cat .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin:/Grid/Grid11gdb/bin
export PATH
ORACLE_HOME=/Grid/Grid11gdb
export ORACLE_HOME
ORACLE_SID=+ASM
export ORACLE_SID
[grid@MMISAPP02 ~]$
[oracle@MMISAPP02 ~]$ cat .bash_profile
# .bash_profile
# Get the aliases and functions
if [ -f ~/.bashrc ]; then
. ~/.bashrc
fi
# User specific environment and startup programs
PATH=$PATH:$HOME/bin:/oracle/Ora11gdb/bin
export PATH
ORACLE_HOME=/oracle/Ora11gdb
export ORACLE_HOME
ORACLE_SID=testdb1
export ORACLE_SID
[oracle@MMISAPP02 ~]$
Preparing the Diskgroups
Install and configure ASMLib for the specific OS kernel.

Installing the RPMs for ASM:
rpm -Uvh oracleasmlib-2.0.4-1.el5.x86_64.rpm
rpm -Uvh oracleasm-support-2.1.7-1.el5.x86_64.rpm
rpm -Uvh oracleasm-2.6.18-308.el5-2.0.5-1.el5.x86_64.rpm
rpm -Uvh oracleasm-2.6.18-308.el5debug-2.0.5-1.el5.x86_64.rpm
rpm -Uvh oracleasm-2.6.18-308.el5-debuginfo-2.0.5-1.el5.x86_64.rpm
rpm -Uvh oracleasm-2.6.18-308.el5xen-2.0.5-1.el5.x86_64.rpm
Note: Only the oracleasm kernel-module RPM matching the running kernel is required; failures installing the other kernel-flavor RPMs can be ignored.
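Before running the rpm commands above, it can help to confirm which oracleasm kernel-module package this host actually needs. A minimal check (the file-name pattern is an assumption based on the package names listed above):

```shell
# The oracleasm kernel module must match the running kernel exactly.
kver=$(uname -r)
echo "Running kernel: $kver"
echo "Required module package pattern: oracleasm-${kver}-*.rpm"
```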
Configure ASMLib
Run the oracleasm initialization script with the configure option:

[root@racnode1 ~]# /usr/sbin/oracleasm configure -i
Configuring the Oracle ASM library driver.

This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.

Default user to own the driver interface []: grid
Default group to own the driver interface []: asmadmin
Start Oracle ASM library driver on boot (y/n) [n]: y
Scan for Oracle ASM disks on boot (y/n) [y]: y
Writing Oracle ASM library driver configuration: done
The script completes the following tasks:
- Creates the /etc/sysconfig/oracleasm configuration file
- Creates the /dev/oracleasm mount point
- Mounts the ASMLib driver file system
Enter the following command to load the oracleasm kernel module:

[root@racnode1 ~]# /usr/sbin/oracleasm init
Creating /dev/oracleasm mount point: /dev/oracleasm
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm
Note: Repeat this procedure on the other node.
Creating the Diskgroups

Deleting the disks (if required):
oracleasm deletedisk NODE1DG
oracleasm deletedisk NODE2DG
oracleasm deletedisk ASMDATA1
oracleasm deletedisk ASMDATA2

On node1:
/etc/init.d/oracleasm stop
/etc/init.d/oracleasm start
On node2:
/etc/init.d/oracleasm stop
/etc/init.d/oracleasm start
Creating the NODE1DG disk:
oracleasm createdisk NODE1DG /dev/mpath/3600a0b8000130c87000038b0502c4e06

On node1:
/etc/init.d/oracleasm stop
/etc/init.d/oracleasm start
On node2:
/etc/init.d/oracleasm stop
/etc/init.d/oracleasm start

Note: Create the disk NODE2DG after the grid installation on node1.
Creating the NODE2DG disk:
oracleasm createdisk NODE2DG /dev/mpath/3600a0b800018f1fb00005a2a502c5719

On node1:
/etc/init.d/oracleasm stop
/etc/init.d/oracleasm start
On node2:
/etc/init.d/oracleasm stop
/etc/init.d/oracleasm start

Note: Create the disks ASMDATA1 and ASMDATA2 after the grid installation on node2.
Creating the ASMDATA disks:
oracleasm createdisk ASMDATA1 /dev/mpath/3600a0b800018f1fb00005a21502c341f
oracleasm createdisk ASMDATA2 /dev/mpath/3600a0b800018f1fb00005a1f502c33f5

On node1:
/etc/init.d/oracleasm stop
/etc/init.d/oracleasm start
On node2:
/etc/init.d/oracleasm stop
/etc/init.d/oracleasm start
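Each createdisk is followed by an ASMLib stop/start on both nodes so the new disk label becomes visible cluster-wide. The repeated stop/start pairs can be scripted; a sketch that only prints the per-node commands for review (the node names and root ssh access are assumptions):

```shell
# Print the ASMLib restart sequence for both nodes after a createdisk.
# Pipe the output to sh to execute; node1/node2 are placeholder hostnames.
for node in node1 node2; do
  echo "ssh root@$node '/etc/init.d/oracleasm stop; /etc/init.d/oracleasm start'"
done
echo "ssh root@node1 'oracleasm listdisks'"
```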
[root@MMISAPP01 Grid11gdb]# /etc/init.d/oracleasm stop
Dropping Oracle ASMLib disks: [ OK ]
Shutting down the Oracle ASMLib driver: [FAILED]  -- ignore this
[root@MMISAPP01 Grid11gdb]# /etc/init.d/oracleasm start
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks: [ OK ]
[root@MMISAPP01 Grid11gdb]# oracleasm listdisks
ASMDATA1
ASMDATA2
NODE1DG
NODE2DG
Note: Use ASMCA to create the diskgroup.

Install and configure Grid Infrastructure for a standalone server with NODE1DG.
Check the status after the installation completes:
[root@MMISAPP01 bin]# ./crsctl check has
CRS-4638: Oracle High Availability Services is online
[root@MMISAPP01 bin]# ./crsctl check css
CRS-4529: Cluster Synchronization Services is online
[root@MMISAPP01 bin]# ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
               ONLINE  ONLINE       mmisapp01
ora.NODE1DG.dg
               ONLINE  ONLINE       mmisapp01
ora.asm
               ONLINE  ONLINE       mmisapp01                Started
ora.ons
               OFFLINE OFFLINE      mmisapp01
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       mmisapp01
ora.diskmon
      1        OFFLINE OFFLINE
ora.evmd
      1        ONLINE  ONLINE       mmisapp01
--------------------------------------------------------------------------------
Note: After the installation on node1, shut down the services on node1.

Start the Grid Infrastructure standalone installation and configuration on node2 with NODE2DG.
Adding the diskgroups on node2 only
*************************************************
(e.g.) alter system set asm_diskstring=<'ORCL:*' or '/dev/rdsk/*' or '/dev/emcpower*', etc.> scope=both;

SQL> alter system set asm_diskstring='/dev/oracleasm/disks/*' scope=both;
alter system set asm_diskstring='/dev/oracleasm/disks/*' scope=both
*
ERROR at line 1:
ORA-02097: parameter cannot be modified because specified value is invalid
ORA-15014: path 'ORCL:NODE2DG' is not in the discovery set

alter system set asm_diskstring='ORCL:ASMDATA1','ORCL:ASMDATA2','ORCL:NODE2DG' scope=both;
Note: Do not include NODE1DG.
*************************************************
(e.g.) alter system set asm_diskgroups='NODE2DG','DATA','RECO','BACKUP' scope=both;

alter system set asm_diskgroups='NODE2DG','DATA' scope=both;
*************************************************
SQL> alter diskgroup 'DATA' mount;
alter diskgroup 'DATA' mount
*
ERROR at line 1:
ORA-15100: invalid or missing diskgroup name
*************************************************
SQL> alter diskgroup DATA mount;
alter diskgroup DATA mount
*
ERROR at line 1:
ORA-15032: not all alterations performed
ORA-15260: permission denied on ASM disk group
*************************************************
SQL> conn sys as sysasm  -- sysasm is new in 11g
Enter password:
Connected.
SQL> alter diskgroup DATA mount;
Diskgroup altered.
************************************************
Stop the services on node2 (crsctl stop has).
Start the services on node1 (crsctl start has).
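This stop-on-one-node/start-on-the-other pattern recurs throughout the procedure. A hypothetical helper that prints the two crsctl commands for a switchover (the hostnames, the grid user, and passwordless ssh are all assumptions; review the output before executing):

```shell
# Print the HAS switchover commands: stop on $1, start on $2.
switch_has() {
  echo "ssh grid@$1 crsctl stop has"
  echo "ssh grid@$2 crsctl start has"
}
switch_has node2 node1
```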
Proceed with the database software installation on node1.
Start the database creation using DBCA.

The testdb database was created for a DWH; hence, to increase db_block_size to 32K, we used a custom database template.
1. Use a custom DB template.
2. 32K block size.
3. The character set should be AL32UTF8.
Stop the services on node1 (crsctl stop has).
Start the services on node2 (crsctl start has).
Proceed with the database software installation on node2.
Steps to start the database on node2
1. Copy the pfile, spfile, and password file from node1.
2. Create the following directories:
mkdir -p /oracle/testdb1/testdb1
mkdir -p /oracle/testdb1/testdb1/alert
mkdir -p /oracle/testdb1/testdb1/cdump
mkdir -p /oracle/testdb1/testdb1/hm
mkdir -p /oracle/testdb1/testdb1/incident
mkdir -p /oracle/testdb1/testdb1/incpkg
mkdir -p /oracle/testdb1/testdb1/ir
mkdir -p /oracle/testdb1/testdb1/lck
mkdir -p /oracle/testdb1/testdb1/metadata
mkdir -p /oracle/testdb1/testdb1/metadata_dgif
mkdir -p /oracle/testdb1/testdb1/metadata_pv
mkdir -p /oracle/testdb1/testdb1/stage
mkdir -p /oracle/testdb1/testdb1/sweep
mkdir -p /oracle/testdb1/testdb1/trace
mkdir -p /oracle/admin/testdb1/adump
Example:
mkdir -p /u01/app/oracle/diag/rdbms/db11gr2/DB11GR2
mkdir -p /u01/app/oracle/diag/rdbms/db11gr2/DB11GR2/alert
mkdir -p /u01/app/oracle/diag/rdbms/db11gr2/DB11GR2/cdump
mkdir -p /u01/app/oracle/diag/rdbms/db11gr2/DB11GR2/hm
mkdir -p /u01/app/oracle/diag/rdbms/db11gr2/DB11GR2/incident
mkdir -p /u01/app/oracle/diag/rdbms/db11gr2/DB11GR2/incpkg
mkdir -p /u01/app/oracle/diag/rdbms/db11gr2/DB11GR2/ir
mkdir -p /u01/app/oracle/diag/rdbms/db11gr2/DB11GR2/lck
mkdir -p /u01/app/oracle/diag/rdbms/db11gr2/DB11GR2/metadata
mkdir -p /u01/app/oracle/diag/rdbms/db11gr2/DB11GR2/metadata_dgif
mkdir -p /u01/app/oracle/diag/rdbms/db11gr2/DB11GR2/metadata_pv
mkdir -p /u01/app/oracle/diag/rdbms/db11gr2/DB11GR2/stage
mkdir -p /u01/app/oracle/diag/rdbms/db11gr2/DB11GR2/sweep
mkdir -p /u01/app/oracle/diag/rdbms/db11gr2/DB11GR2/trace
mkdir -p /u01/app/oracle/admin/DB11GR2/adump
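The directory lists above can be generated in a single loop instead of typing each mkdir. A sketch using the testdb1 paths from step 2 (the commands are echoed for review; pipe the output to sh to execute):

```shell
# Emit the mkdir commands for the diag directory tree copied from node1.
base=/oracle/testdb1/testdb1
echo "mkdir -p $base"
for d in alert cdump hm incident incpkg ir lck metadata metadata_dgif metadata_pv stage sweep trace; do
  echo "mkdir -p $base/$d"
done
echo "mkdir -p /oracle/admin/testdb1/adump"
```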
3. Connect to node #2 (as the grid user) and start the OHAS services:
crsctl start has
crsctl stat res -t
4. Add the database as a resource to OHAS.
Connect as the oracle OS user (on node #2) and add the database resource to the SIHA/OHAS/Restart stack. Very important: you must use the srvctl command located in the database Oracle Home's bin directory (do NOT use the one located in the Grid Infrastructure Oracle Home), as follows:

$> <RDBMS Oracle Home>/bin/srvctl add database -d DB11202 -n DB11202 -o /u01/app/oracle/product/11.2.0/dbhome_1 -p +DATA/DB11202/spfileDB11202.ora -s OPEN -y AUTOMATIC -a DATA,RECO -t IMMEDIATE

/oracle/Ora11gdb/bin/srvctl add database -d testdb1 -n testdb1 -o /oracle/Ora11gdb -p +DATA/testdb1/spfiletestdb1.ora -s OPEN -y AUTOMATIC -a DATA -t IMMEDIATE
Example:
[oracle@MMISAPP02 ~]$ echo $ORACLE_HOME
/oracle/Ora11gdb/
[oracle@MMISAPP02 ~]$ srvctl add database -d DB11202 -n DB11202 -o /oracle/Ora11gdb -p +DATA/DB11202/spfileDB11202.ora -s OPEN -y AUTOMATIC -a DATA,RECO -t IMMEDIATE
5. Verify the configuration of the database.
Verify the new settings associated with your database resource (connected as the oracle OS user) from node #2:

$> <RDBMS Oracle Home>/bin/srvctl config database -d testdb1

Your database should show a configuration similar to this example:
[oracle@MMISAPP02 ~]$ srvctl config database -d testdb1
Database unique name: testdb1
Database name: testdb1
Oracle home: /oracle/Ora11gdb
Oracle user: grid
Spfile: +DATA/testdb1/spfiletestdb1.ora
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Disk Groups: DATA,RECO
Services:
[oracle@MMISAPP02 ~]$
6. Start the database on node2.
Manually start the database instance and open the database for the first time (through SQL*Plus). This is required the first time in order to set the STATE=ONLINE flag on the associated database resource, so that the next time OHAS is restarted (on node #2) it will open the database automatically:
[oracle@MMISAPP02 ~]$ sqlplus / as sysdba
SQL*Plus: Release 11.2.0.2.0 Production on Fri Feb 24 20:05:44 2012
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area 1221992448 bytes
Fixed Size                  1343832 bytes
Variable Size             922750632 bytes
Database Buffers          285212672 bytes
Redo Buffers               12685312 bytes
Database mounted.
Database opened.
SQL> show parameter spfile
NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
spfile                               string      +DATA/testdb1/spfiletestdb1.ora
SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
Change the LISTENER to point to the cluster IP.
Note: Configure the listener with port 6001.
*********************************************************************************
# listener.ora Network
Configuration File: /Grid/Grid11gdb/network/admin/listener.ora
# Generated by Oracle
configuration tools.
LISTENER =
(DESCRIPTION_LIST =
(DESCRIPTION =
(ADDRESS = (PROTOCOL = IPC)(KEY =
EXTPROC1521))
(ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.40)(PORT = 1521))
)
)
ADR_BASE_LISTENER = /Grid
ENABLE_GLOBAL_DYNAMIC_ENDPOINT_LISTENER=ON # line added by Agent
~
*********************************************************************************
Restart the listener and check the status:

[grid@MMISAPP02 ~]$ lsnrctl status
LSNRCTL for Linux: Version 11.2.0.3.0 - Production on 21-AUG-2012 12:57:32
Copyright (c) 1991, 2011, Oracle. All rights reserved.
Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
STATUS of the LISTENER
------------------------
Alias                     LISTENER
Version                   TNSLSNR for Linux: Version 11.2.0.3.0 - Production
Start Date                21-AUG-2012 12:53:30
Uptime                    0 days 0 hr. 4 min. 1 sec
Trace Level               off
Security                  ON: Local OS Authentication
SNMP                      OFF
Listener Parameter File   /Grid/Grid11gdb/network/admin/listener.ora
Listener Log File         /Grid/diag/tnslsnr/MMISAPP02/listener/alert/log.xml
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.40)(PORT=1521)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=192.168.1.20)(PORT=1521)))
Services Summary...
Service "+ASM" has 1 instance(s).
  Instance "+ASM", status READY, has 1 handler(s) for this service...
Service "testdb1" has 1 instance(s).
  Instance "testdb1", status READY, has 1 handler(s) for this service...
Service "testdb1XDB" has 1 instance(s).
  Instance "testdb1", status READY, has 1 handler(s) for this service...
The command completed successfully
Note: Similar changes have to be made on the other node(s) as well.
Connect to the database via SQL*Plus and check:

[grid@MMISAPP01 bin]$ su - oracle
Password:
[oracle@MMISAPP01 ~]$ sqlplus system/system@testdb1
SQL*Plus: Release 11.2.0.3.0 Production on Sat Aug 18 17:17:33 2012
Copyright (c) 1982, 2011, Oracle. All rights reserved.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, Automatic Storage Management, OLAP, Data Mining
and Real Application Testing options
SQL> select name,open_mode from v$database;
NAME      OPEN_MODE
--------- --------------------
TESTDB1   READ WRITE
SQL>
Starting/stopping and checking the status of DB components

[grid@MMISAPP01 bin]$ ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       mmisapp01
ora.LISTENER.lsnr
               OFFLINE OFFLINE      mmisapp01
ora.asm
               ONLINE  ONLINE       mmisapp01                Started
ora.ons
               OFFLINE OFFLINE      mmisapp01
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       mmisapp01
ora.diskmon
      1        OFFLINE OFFLINE
ora.evmd
      1        ONLINE  ONLINE       mmisapp01
ora.testdb1.db
      1        OFFLINE OFFLINE                               Instance Shutdown
[grid@MMISAPP01 bin]$ ./srvctl stop asm
PRCR-1065 : Failed to stop resource ora.asm
CRS-2529: Unable to act on 'ora.asm' because that would require stopping or relocating 'ora.DATA.dg', but the force option was not specified
[grid@MMISAPP01 bin]$ ./crsctl stop has
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'mmisapp01'
CRS-2673: Attempting to stop 'ora.DATA.dg' on 'mmisapp01'
CRS-2677: Stop of 'ora.DATA.dg' on 'mmisapp01' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'mmisapp01'
CRS-2677: Stop of 'ora.asm' on 'mmisapp01' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'mmisapp01'
CRS-2677: Stop of 'ora.cssd' on 'mmisapp01' succeeded
CRS-2673: Attempting to stop 'ora.evmd' on 'mmisapp01'
CRS-2677: Stop of 'ora.evmd' on 'mmisapp01' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'mmisapp01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
[grid@MMISAPP01 bin]$ ./crsctl stat res -t
CRS-4639: Could not contact Oracle High Availability Services
CRS-4000: Command Status failed, or completed with errors.
[grid@MMISAPP01 bin]$ ./crsctl start has
CRS-4123: Oracle High Availability Services has been started.
[grid@MMISAPP01 bin]$ ./crsctl stat res -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATA.dg
               ONLINE  ONLINE       mmisapp01
ora.LISTENER.lsnr
               ONLINE  ONLINE       mmisapp01
ora.asm
               ONLINE  ONLINE       mmisapp01                Started
ora.ons
               OFFLINE OFFLINE      mmisapp01
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.cssd
      1        ONLINE  ONLINE       mmisapp01
ora.diskmon
      1        OFFLINE OFFLINE
ora.evmd
      1        ONLINE  INTERMEDIATE mmisapp01
ora.testdb1.db
      1        OFFLINE OFFLINE                               Instance Shutdown
[grid@MMISAPP01 bin]$ ./srvctl status asm
ASM is running on mmisapp01
[grid@MMISAPP01 bin]$ ./srvctl status database -d testdb1
Database is not running.
[grid@MMISAPP01 bin]$ ./srvctl status listener -l LISTENER
Listener LISTENER is enabled
Listener LISTENER is running on node(s): mmisapp01
[grid@MMISAPP01 bin]$ ./srvctl start database -d testdb1
Prepare and attach scripts to start up and shut down the DB services.

Verify the cluster status from the OS:
[root@MMISAPP01 home]# clustat
Cluster Status for MMISAPC01 @ Tue Aug 21 13:22:58 2012
Member Status: Quorate

 Member Name                        ID   Status
 ------ ----                        ---- ------
 MMISAPP01.ubicbs.co.in             1    Online, Local, rgmanager
 MMISAPP02.ubicbs.co.in             2    Online, rgmanager

 Service Name                       Owner (Last)                State
 ------- ----                       ----- ------                -----
 service:OraDB_serv                 MMISAPP01.ubicbs.co.in      started
 service:lvm_serv                   (MMISAPP01.ubicbs.co.in)    failed
[root@MMISAPP01 home]#
Reference:
How To Setup ASM (11.2) On An Active/Passive Cluster (Non-RAC). *** Enhanced *** [ID 1296124.1]