Category Archives: RAC

Using rconfig to convert a single instance 11g database to RAC

This article covers the conversion of a pre-existing Oracle 11g single instance database into a two node RAC. The following assumptions are made for this procedure to execute correctly:
1. Oracle 11g GRID has been installed on both RAC nodes.
2. Oracle 11g RDBMS has been installed on both RAC nodes.
3. Oracle 11g single instance database has been copied to RAC node 1 and operates correctly as a single instance database.

1. Logon to node 1 of your Oracle RAC server as the RDBMS owner.

2. Copy the sample file $ORACLE_HOME/assistants/rconfig/sampleXMLs/ConvertToRAC_AdminManaged.xml to a working copy (here, orcl.xml).

[oracle@mylinux1 dbhome_1]$ cp $ORACLE_HOME/assistants/rconfig/sampleXMLs/ConvertToRAC_AdminManaged.xml orcl.xml
[oracle@mylinux1 dbhome_1]$

3. The sample XML file gives detailed instructions on how to edit it for your specific use. In this example, edit the following lines (a sketch of the edited values follows the list):

SourceDBHome = single instance Oracle Home directory
TargetDBHome = RAC Oracle Home; this can be the same as the single instance Oracle Home directory
SourceDBInfo SID = SID of the source database
Password = SYS password
Node name = node 1 name, normally the machine name
Node name = node 2 name, normally the machine name
InstancePrefix = prefix of your RAC instances, normally equal to the SID of your source database
SharedStorage type = "ASM" or "OCFS"
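
A minimal sketch of how the edited elements might look, assuming the Oracle Home is /u01/app/oracle/product/11.2.0/dbhome_1, the SID is orcl, and the nodes are mylinux1 and mylinux2; the element names follow the 11.2 sample file, so confirm them against your own copy before running rconfig:

<n:SourceDBHome>/u01/app/oracle/product/11.2.0/dbhome_1</n:SourceDBHome>
<n:TargetDBHome>/u01/app/oracle/product/11.2.0/dbhome_1</n:TargetDBHome>
<n:SourceDBInfo SID="orcl">
<n:Credentials>
<n:User>sys</n:User>
<n:Password>sys_password</n:Password>
<n:Role>sysdba</n:Role>
</n:Credentials>
</n:SourceDBInfo>
<n:NodeList>
<n:Node name="mylinux1"/>
<n:Node name="mylinux2"/>
</n:NodeList>
<n:InstancePrefix>orcl</n:InstancePrefix>
<n:SharedStorage type="ASM">

The sample file also documents a verify attribute on its Convert element; setting verify="ONLY" runs just the prerequisite checks without converting anything, which is a sensible first pass before the real run.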

4. Ensure that the single instance database is up and running.

[oracle@mylinux1 dbhome_1]$ ps -ef|grep smon
oracle 18508 1 0 02:12 ? 00:00:00 ora_smon_orcl
oracle 26774 7777 0 02:46 pts/1 00:00:00 grep smon
[oracle@mylinux1 dbhome_1]$

5. Execute the rconfig command and pass your edited XML file to it.
[oracle@mylinux1 bin]$ ./rconfig ./orcl.xml
Converting Database "orcl" to Cluster Database. Target Oracle Home: /u01/app/oracle/product/11.2.0/dbhome_1. Database Role: PRIMARY.
Setting Data Files and Control Files
Adding Database Instances
Adding Redo Logs
Enabling threads for all Database Instances
Setting TEMP tablespace
Adding UNDO tablespaces
Adding Trace files
Setting Flash Recovery Area
Updating Oratab
Creating Password file(s)
Configuring Listeners
Configuring related CRS resources
Starting Cluster Database
<?xml version="1.0" ?>

Operation Succeeded

/u01/app/oracle/product/11.2.0/dbhome_1

[oracle@mylinux1 bin]$

6. The database can now be confirmed running in RAC mode on both nodes.

[oracle@mylinux1 bin]$ ps -ef|grep smon
oracle 16544 1 0 03:04 ? 00:00:00 ora_smon_orcl1
oracle 21072 7777 0 03:07 pts/1 00:00:00 grep smon
[oracle@mylinux1 bin]$

[root@mylinux2 ~]# ps -ef|grep smon
oracle 27298 1 0 03:03 ? 00:00:00 ora_smon_orcl2
root 29474 23387 0 03:07 pts/4 00:00:00 grep smon
[root@mylinux2 ~]#

This completes the conversion of a single instance database to Oracle 11g RAC.

Larry J. Catt, OCP
oracle@allcompute.com
www.allcompute.com

Move Oracle RAC database into Archive log mode

Moving Oracle RAC databases into and out of archive log mode requires a few more steps due to the existence of multiple instances. This procedure covers the steps required to move a non-archive log mode database into archive log mode.

1. Logon to Oracle node1 server as the oracle software owner.

2. Validate the current archive log mode of the database by logging onto the database and issuing the [archive log list] command.


[oracle@mylinux1 ~]$ export ORACLE_SID=orcl1
[oracle@mylinux1 ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.1.0 Production on Thu Jan 20 22:57:07 2011

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination /u05/oradata/orcl/archive
Oldest online log sequence 66
Current log sequence 68

3. Take the database out of cluster mode by setting the parameter cluster_database to false, then exit.

SQL> alter system set cluster_database=false scope=spfile;

System altered.

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options
[oracle@mylinux1 ~]$

4. Stop the clustered database with srvctl utility.

[oracle@mylinux1 ~]$ srvctl stop database -d orcl
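
Before continuing, you can confirm that both instances are down; srvctl reports the state of each instance of the clustered database (assuming the database resource is registered as orcl, as above):

[oracle@mylinux1 ~]$ srvctl status database -d orcl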

5. Logon to sqlplus on node1 and start the instance in single instance mode with the mount option.

[oracle@mylinux1 ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.1.0 Production on Thu Jan 20 22:59:08 2011

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to an idle instance.

SQL> startup mount
ORACLE instance started.

Total System Global Area 367476736 bytes
Fixed Size 2213376 bytes
Variable Size 293603840 bytes
Database Buffers 67108864 bytes
Redo Buffers 4550656 bytes
Database mounted.
SQL>

6. Alter the database to archive log mode.

SQL> alter database archivelog;

Database altered.

SQL>
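
As an optional check before re-clustering the database, the new log mode can also be read from the data dictionary; the query below should now return ARCHIVELOG:

SQL> select log_mode from v$database;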

7. Change the parameter cluster_database back to true, shut down the instance, and exit.

SQL> alter system set cluster_database=true scope=spfile;

System altered.

SQL> shutdown immediate
ORA-01109: database not open

Database dismounted.
ORACLE instance shut down.
SQL>

SQL> exit
Disconnected from Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

8. Use the srvctl utility to startup the clustered database.

[oracle@mylinux1 ~]$ srvctl start database -d orcl

9. Logon to sqlplus on node1 and check the archive log mode with the command [archive log list].

[oracle@mylinux1 ~]$ sqlplus '/ as sysdba'

SQL*Plus: Release 11.2.0.1.0 Production on Thu Jan 20 23:01:22 2011

Copyright (c) 1982, 2009, Oracle. All rights reserved.

Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining
and Real Application Testing options

SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination /u05/oradata/orcl/archive
Oldest online log sequence 66
Next log sequence to archive 68
Current log sequence 68
SQL>

This completes moving an Oracle RAC database into archive log mode.

Larry J. Catt, OCP
oracle@allcompute.com
www.allcompute.com

Undo root.sh script in 11g GRID or RDBMS install

If root.sh failed during an Oracle 11g clusterware or RDBMS installation, remove the changes it made before re-executing it by using the following procedure.

1. Logon as the root user.

2. Navigate to the Oracle GRID_HOME.

3. Execute the following command.

/opt/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -verbose
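
If you are deconfiguring the entire cluster rather than a single failed node, Oracle also documents a -lastnode flag for rootcrs.pl, to be run only on the final node so that the OCR and voting disk are cleaned up as well; a sketch using the same Grid Home as above:

/opt/app/11.2.0/grid/crs/install/rootcrs.pl -deconfig -force -lastnode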

This completes undoing the execution of root.sh from an Oracle Grid installation.

Larry J. Catt, OCP 9i, 10g
oracle@allcompute.com
www.allcompute.com

SSH, PING, and RSH – Permissions causing failure on Oracle RAC Installation

Oracle RAC requires access to networking components that are typically restricted on most systems, most notably the utilities SSH, PING, and RSH. If you are attempting to install Oracle RAC software and receiving communication failures, check the permissions on these utilities as a possible cause.

1. Logon to your Linux server as root.

2. Change the permissions on the following files to u+s (setuid), then verify the change as shown after the commands.

chmod u+s /bin/ping
chmod u+s /usr/bin/ssh
chmod u+s /usr/bin/rsh
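
To confirm that the setuid bit took effect, list the binaries; the owner permissions should now show an s in place of the execute bit (for example -rwsr-xr-x):

ls -l /bin/ping /usr/bin/ssh /usr/bin/rsh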

This completes allowing Oracle permissions to SSH, PING, and RSH for RAC configurations.
Larry J. Catt, OCP 9i, 10g
oracle@allcompute.com
www.allcompute.com

Removing Oracle 10g RAC components from LINUX server

The following procedure is used to remove all RAC components created during the clusterware and RDBMS installation of 10g from a Linux server. NOTE: Once executed, Clusterware will not be usable.

1. Logon to the server as root.

2. Run the following commands from any location on your server to remove all Oracle 10g RAC components at the OS layer. Note: RAC will not be usable on the system once they are executed; a pre-check is sketched after the list.

rm -rf /etc/oracle
rm -rf /etc/oraInst.loc
rm -rf /etc/oratab
rm -rf /etc/init.d/*crs*
rm -rf /etc/init.d/init.cssd
rm -rf /etc/init.d/init.evmd
rm -rf /etc/rc*/*crs*
rm -rf /etc/*crs*
rm -rf /tmp/Ora*
rm -rf /tmp/.oracle
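
Before removing anything, it is worth confirming that no clusterware processes are still running; a quick sketch of such a check (process names vary slightly by release):

ps -ef | grep -E 'crsd|cssd|evmd' | grep -v grep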


Larry J. Catt
oracle@allcompute.com
www.allcompute.com

Oracle RAC and the NTP process

If you are installing Oracle Clusterware on a Linux or UNIX machine, the following error may occur if you have not enabled the NTP slewing option:

PRVF-5439: NTP daemon does not have slewing option "-x" set on the node

The following procedure will remedy this problem, as demonstrated in a RHEL 5.4 environment.

1. Logon to your Oracle cluster node as the root user

2. Open the file /etc/sysconfig/ntpd and ensure that it contains the line:


OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -x"

3. If the line does not contain the -x option, edit the file so it reads as above.
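
If you prefer to make the change from the command line, a minimal sketch with sed is shown below; it assumes the OPTIONS line exists and does not already contain -x, and it backs up the file first:

cp /etc/sysconfig/ntpd /etc/sysconfig/ntpd.bak
sed -i 's/^OPTIONS="\(.*\)"/OPTIONS="\1 -x"/' /etc/sysconfig/ntpd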

4. Then stop the service as below, with the command /sbin/service ntpd stop


[root@mylinux1 ~]# /sbin/service ntpd stop
Shutting down ntpd: [ OK ]

5. Restart the service as below, with the command /sbin/service ntpd start


[root@mylinux1 ~]# /sbin/service ntpd start
ntpd: Synchronizing with time server: [ OK ]
Starting ntpd: [ OK ]
[root@mylinux1 ~]#

6. Verify that the -x option has been implemented with the command ps -ef|grep ntp


[root@mylinux1 ~]# ps -ef|grep ntp
ntp 8301 1 0 20:33 ? 00:00:00 ntpd -u ntp:ntp -p /var/run/ntpd.pid -x
root 8792 28965 0 20:34 pts/1 00:00:00 grep ntp
[root@mylinux1 ~]#

This completes turning the NTPD slewing option on for Oracle clusterware.

Larry J. Catt, OCP 9i, 10g
oracle@allcompute.com
www.allcompute.com

CRS-4640 and CRS-4000 during execution of root.sh

The following procedure helps resolve errors CRS-4640 and CRS-4000 during the execution of the root.sh script in the installation of Oracle 11g clusterware.

ERROR:
Adding daemon to inittab
CRS-4640: Oracle High Availability Services is already active
CRS-4000: Command Start failed, or completed with errors.
ohasd failed to start: Inappropriate ioctl for device
ohasd failed to start at /u01/app/11.2.0/grid/crs/install/rootcrs.pl line 443.
[root@mylinux1 grid]#

CAUSE: Previous install of clusterware not properly removed.

FIX:

1. Execute the script $CRS_HOME/crs/install/rootcrs.pl -deconfig -force

[root@mylinux1 install]# ./rootcrs.pl -deconfig -force
2010-04-15 15:43:23: Parsing the host name
2010-04-15 15:43:23: Checking for super user privileges
2010-04-15 15:43:23: User has super user privileges
Using configuration parameter file: ./crsconfig_params
PRCR-1035 : Failed to look up CRS resource ora.cluster_vip.type for 1
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.eons is registered
Cannot communicate with crsd

ACFS-9200: Supported
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4000: Command Stop failed, or completed with errors.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'mylinux1'
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'mylinux1'
CRS-2673: Attempting to stop 'ora.evmd' on 'mylinux1'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'mylinux1'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'mylinux1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'mylinux1' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'mylinux1' succeeded
CRS-2677: Stop of 'ora.evmd' on 'mylinux1' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'mylinux1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'mylinux1'
CRS-2677: Stop of 'ora.gipcd' on 'mylinux1' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'mylinux1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
Successfully deconfigured Oracle clusterware stack on this node
[root@mylinux1 install]#

2. Once you have received the message "Successfully deconfigured Oracle clusterware", then you can re-execute the shell script $CRS_HOME/root.sh

[root@mylinux1 grid]# ./root.sh
…
…
…
…
Configure Oracle Grid Infrastructure for a Cluster … succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer…

Checking swap space: must be greater than 500 MB. Actual 17625 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[root@mylinux1 grid]#

This completes the resolution to Oracle clusterware errors CRS-4640 and CRS-4000 during the execution of root.sh.

Larry J. Catt, OCP 9i, 10g
oracle@allcompute.com
www.allcompute.com

Creation of Oracle OS accounts for 11g RAC install on Linux

Deployment of Oracle software requires the creation of multiple groups and users on UNIX and Linux servers for proper configuration and administration. This procedure covers the basic creation of the Oracle groups and users for a standard 11g RAC implementation.

1. Create the group accounts on all nodes. NOTE: the group IDs must be exactly the same on every node.

/usr/sbin/groupadd -g 501 oinstall
/usr/sbin/groupadd -g 502 dba
/usr/sbin/groupadd -g 504 asmadmin
/usr/sbin/groupadd -g 506 asmdba
/usr/sbin/groupadd -g 507 asmoper

2. Create the user accounts on all nodes. NOTE: the user IDs must be exactly the same on every node (verify them as shown after the commands).

/usr/sbin/useradd -u 501 -g oinstall -G asmadmin,asmdba,asmoper grid
/usr/sbin/useradd -u 502 -g oinstall -G dba,asmdba oracle
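
After creating the accounts, the numeric IDs and group memberships can be compared across nodes with the id command; the output must match on every node:

id grid
id oracle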

3. Set the passwords for the grid and oracle accounts; use the same password on every node.

passwd oracle
Changing password for user oracle.
New UNIX password: password
Retype new UNIX password: password
passwd: all authentication tokens updated successfully.
passwd grid
Changing password for user grid.
New UNIX password: password
Retype new UNIX password: password
passwd: all authentication tokens updated successfully.

This completes the standard creation of Oracle groups and users on Linux.

Larry Catt, OCP 9i, 10g
oracle@allcompute.com
www.allcompute.com

Configuration of OCFS2 in LINUX

OCFS2 (Oracle Cluster File System version 2) is a file system that allows multiple machines to open the same files at the same time without corruption. It can be used for many purposes but is most often seen in Oracle RAC systems. This article details the configuration of OCFS2 after the RPMs have been installed on your OS.

1. Logon to your Linux server as root.

2. Create the directory /etc/ocfs2 to house your cluster.conf file. This file contains the name of your cluster and all nodes within that cluster.

[root@mylinux1 etc]# mkdir /etc/ocfs2
[root@mylinux1 etc]# chmod 775 /etc/ocfs2

3. Edit the file cluster.conf and enter entries similar to those below, changing the values of ip_address, name, and cluster to values correct for your installation. The file must be identical on every node; a copy command is sketched after the example.

[root@mylinux1 etc]# vi /etc/ocfs2/cluster.conf

node:
ip_port = 7777
ip_address = 204.34.132.38
number = 0
name = mylinux1.mydomain.com
cluster = myrac

node:
ip_port = 7777
ip_address = 204.34.132.39
number = 1
name = mylinux2.mydomain.com
cluster = myrac

cluster:
node_count = 2
name = myrac
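
Assuming root ssh between the nodes is permitted, one simple way to propagate the file to the second node is:

ssh root@mylinux2 "mkdir -p /etc/ocfs2"
scp /etc/ocfs2/cluster.conf root@mylinux2:/etc/ocfs2/cluster.conf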

4. Configure the OCFS2 stack on each node of the RAC with the o2cb configure command. NOTE: enter the following answers:


Load O2CB driver on boot (y/n) [y] = y
Cluster stack backing O2CB [o2cb] = o2cb
Cluster to start on boot (Enter "none" to clear) [ocfs2]: = name of the cluster in the cluster.conf file; for this example it is myrac
Specify heartbeat dead threshold (>=7) [31] = 31
Specify network idle timeout in ms (>=5000) [30000] = 30000
Specify network keepalive delay in ms (>=1000) [2000] = 2000
Specify network reconnect delay in ms (>=2000) [2000] = 2000

Example:
[root@mylinux1 etc]# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [y]:
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]: myrac
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Starting O2CB cluster myrac: OK
[root@mylinux1 etc]#

[root@mylinux2 etc]# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [n]: y
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]: myrac
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading filesystem "ocfs2_dlmfs": OK
Creating directory '/dlm': OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster myrac: OK
[root@mylinux2 etc]#
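
Once both nodes are configured, the state of the O2CB stack can be checked at any time with:

/etc/init.d/o2cb status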


This completes configuration of OCFS2 for Oracle RAC.

Larry J. Catt, OCP 9i, 10g
oracle@allcompute.com
www.allcompute.com

Unable to mount OCFS2 drives

Oracle provides the OCFS2 file system to support Oracle RAC file storage. This file system provides a locking mechanism that allows files to be accessed by multiple Oracle instances while avoiding corruption. The OCFS2 binaries must be started before any OCFS2-formatted mount points can be accessed. This article shows the error generated when the OCFS2 stack has not been started and how to resolve the problem.

General OS error:

[root@mylinux init.d]# mount /dev/mapper/MPATH10 /u02
mount.ocfs2: Unable to access cluster service while trying initialize cluster

Resolution:

1. Logon to your server as root.
2. Change directory to /etc/init.d

[root@mylinux /]# cd /etc/init.d
[root@mylinux init.d]# pwd
/etc/init.d
[root@mylinux init.d]#

3. Execute the OS layer command ./o2cb load.

[root@mylinux init.d]# ./o2cb load
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading filesystem "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
[root@mylinux init.d]#

4. Execute the OS layer command ./o2cb online

[root@mylinux init.d]# ./o2cb online
Starting O2CB cluster ocfs2: OK
[root@mylinux init.d]#

5. Attempt to mount your ocfs2 storage device.

[root@mylinux init.d]# mount /dev/mapper/MPATH10 /u02
[root@mylinux init.d]#
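
To keep the problem from recurring after a reboot, the o2cb and ocfs2 services can be set to start automatically; a sketch for RHEL-style init (verify the service names on your distribution):

chkconfig o2cb on
chkconfig ocfs2 on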

6. This completes restarting the OCFS2 binaries.

Larry J. Catt, OCP 9i, 10g
oracle@allcompute.com
www.allcompute.com