Formatting of OCFS2 Shared Storage

The Oracle OCFS2 file system can be used to format shared storage for multiple-node access. To accomplish this task, the OCFS2 RPMs must be installed and configured. Below is the procedure for preparing the shared disks for use once OCFS2 has been installed and configured.

1. Log on to one of your Oracle servers as the root user.

2. Locate the shared storage presented in the directory /dev/mapper on both nodes.

-mylinux1
[root@mylinux1 ~]# ll /dev/mapper
total 0
crw------- 1 root root 10, 63 Jan 22 17:44 control
brw-rw---- 1 root disk 253, 3 Jan 22 17:45 DATA-50GB-02
brw-rw---- 1 root disk 253, 8 Jan 22 17:45 DATA-50GB-03
brw-rw---- 1 root disk 253, 1 Jan 22 17:45 VOTE-1GB-05
brw-rw---- 1 root disk 253, 2 Jan 22 17:45 VOTE-1GB-06
brw-rw---- 1 root disk 253, 4 Jan 22 17:45 HOME-50GB-02
[root@mylinux1 ~]#

-mylinux2
[root@mylinux2 ~]# ll /dev/mapper
total 0
crw------- 1 root root 10, 63 Jan 22 17:44 control
brw-rw---- 1 root disk 253, 3 Jan 22 17:45 DATA-50GB-02
brw-rw---- 1 root disk 253, 8 Jan 22 17:45 DATA-50GB-03
brw-rw---- 1 root disk 253, 1 Jan 22 17:45 VOTE-1GB-05
brw-rw---- 1 root disk 253, 2 Jan 22 17:45 VOTE-1GB-06
brw-rw---- 1 root disk 253, 4 Jan 22 17:45 HOME-50GB-03
[root@mylinux2 ~]#

3. The shared storage will be mounted on each node of the RAC. In this example, our shared storage consists of the following devices (a quick cross-node visibility check is sketched after the list):


/dev/mapper/DATA-50GB-02
/dev/mapper/DATA-50GB-03
/dev/mapper/VOTE-1GB-05
/dev/mapper/VOTE-1GB-06
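Because device-mapper names are not guaranteed to match across hosts, it is worth confirming that each shared device is visible under the same name on both nodes before formatting. A minimal sketch, assuming passwordless root ssh between the example hosts mylinux1 and mylinux2:

# List each shared device on both nodes; a "No such file or directory"
# error means the mapping differs on that host.
DEVS="DATA-50GB-02 DATA-50GB-03 VOTE-1GB-05 VOTE-1GB-06"
for host in mylinux1 mylinux2; do
    echo "--- $host ---"
    for dev in $DEVS; do
        ssh root@$host ls -l /dev/mapper/$dev
    done
done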

4. As the root user, format each shared storage device with the following command. NOTE: Formatting of the shared disks is performed on one node only; repeating the format on additional nodes will destroy all information.

Example:

/sbin/mkfs.ocfs2 /dev/mapper/DATA-50GB-02

[root@mylinux1 ~]# /sbin/mkfs.ocfs2 /dev/mapper/DATA-50GB-02
mkfs.ocfs2 1.4.4
Cluster stack: classic o2cb
Overwriting existing ocfs2 partition.
Proceed (y/N): y
Label:
Features: sparse backup-super unwritten inline-data strict-journal-super
Block size: 2048 (11 bits)
Cluster size: 4096 (12 bits)
Volume size: 1069252608 (261048 clusters) (522096 blocks)
Cluster groups: 17 (tail covers 7096 clusters, rest cover 15872 clusters)
Extent allocator size: 4194304 (1 groups)
Journal size: 33554432
Node slots: 2
Creating bitmaps: done
Initializing superblock: done
Writing system files: done
Writing superblock: done
Writing backup superblock: 0 block(s)
Formatting Journals: done
Growing extent allocator: done
Formatting slot map: done
Writing lost+found: done
mkfs.ocfs2 successful

[root@mylinux1 ~]#

This procedure is repeated for each shared storage device.
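A minimal sketch of formatting the remaining devices in one pass, assuming the device names from this example and a two-node cluster (the -L and -N options of mkfs.ocfs2 set a volume label and the number of node slots, respectively):

# Format each remaining shared device; run from ONE node only.
# -L sets a volume label, -N sets the number of node slots (2 for this cluster).
for dev in DATA-50GB-03 VOTE-1GB-05 VOTE-1GB-06; do
    /sbin/mkfs.ocfs2 -L "$dev" -N 2 /dev/mapper/$dev
done

mkfs.ocfs2 will still prompt before overwriting an existing OCFS2 partition. Once all devices are formatted, mounted.ocfs2 -d (part of ocfs2-tools) can be used to confirm that each device now carries an OCFS2 file system.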

Larry J. Catt, OCP 9i, 10g
oracle@allcompute.com
www.allcompute.com

Configuration of OCFS2 in LINUX

OCFS2 (Oracle Cluster File System version 2) is a file system that allows multiple machines to open the same files at the same time without corruption. It can be used for many purposes but is most often seen in Oracle RAC systems. This article details the configuration of OCFS2 after the RPMs have been installed on your OS.

1. Log on to your Linux server as root.

2. Create the directory /etc/ocfs2 to house your cluster.conf file. This file will contain the name of your cluster and all nodes within that cluster.

[root@mylinux1 etc]# mkdir /etc/ocfs2
[root@mylinux1 etc]# chmod 775 /etc/ocfs2

3. Edit the file cluster.conf and enter stanzas similar to those below, changing the values of ip_address, name, and cluster to match your installation. The same file must exist on every node; a copy step is sketched after the example.

[root@mylinux1 etc]# vi /etc/ocfs2/cluster.conf

node:
        ip_port = 7777
        ip_address = 204.34.132.38
        number = 0
        name = mylinux1.mydomain.com
        cluster = myrac

node:
        ip_port = 7777
        ip_address = 204.34.132.39
        number = 1
        name = mylinux2.mydomain.com
        cluster = myrac

cluster:
        node_count = 2
        name = myrac
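The cluster.conf file must be identical on every node in the cluster. One way to distribute it, assuming root ssh access between the example hosts:

# Create the directory on the second node and copy the file to it.
ssh root@mylinux2 'mkdir -p /etc/ocfs2 && chmod 775 /etc/ocfs2'
scp /etc/ocfs2/cluster.conf root@mylinux2:/etc/ocfs2/cluster.conf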

4. Configure OCFS2 on each node of the RAC with the o2cb configure command. NOTE: Respond to the prompts as follows:


Load O2CB driver on boot (y/n) [y] = y
Cluster stack backing O2CB [o2cb] = o2cb
Cluster to start on boot (Enter "none" to clear) [ocfs2] = the cluster name from your cluster.conf file; in this example, myrac
Specify heartbeat dead threshold (>=7) [31] = 31
Specify network idle timeout in ms (>=5000) [30000] = 30000
Specify network keepalive delay in ms (>=1000) [2000] = 2000
Specify network reconnect delay in ms (>=2000) [2000] = 2000

Example:
[root@mylinux1 etc]# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [y]:
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]: myrac
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Starting O2CB cluster myrac: OK
[root@mylinux1 etc]#

[root@mylinux2 etc]# /etc/init.d/o2cb configure
Configuring the O2CB driver.

This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.

Load O2CB driver on boot (y/n) [n]: y
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]: myrac
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
Writing O2CB configuration: OK
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading filesystem "ocfs2_dlmfs": OK
Creating directory '/dlm': OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster myrac: OK
[root@mylinux2 etc]#
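Once both nodes are configured, the state of the stack can be verified on each node with the o2cb status command. For example, from the first node (again assuming root ssh access to the second):

# Report the driver, cluster, and heartbeat state on each node.
for host in mylinux1 mylinux2; do
    echo "--- $host ---"
    ssh root@$host /etc/init.d/o2cb status
done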


This completes configuration of OCFS2 for Oracle RAC.

Larry J. Catt, OCP 9i, 10g
oracle@allcompute.com
www.allcompute.com

Download and Installation of OCFS2 RPMs for Linux

Oracle Cluster File System 2 (OCFS2) is a file system which allows multiple hosts to access the same files on shared storage at the same time. This type of access is required for deployment of an Oracle RAC system. In this article we cover the procedure to download and install the RPMs for OCFS2 to support shared storage on an Oracle RAC system.

1. Determine the current kernel installed on all RAC nodes. NOTE: The kernel version must be the same on every RAC node.

NODE 1:
[root@mylinux1 etc]# uname -r
2.6.18-194.32.1.el5
[root@mylinux1 etc]#

NODE2:
[root@mylinux2 ~]# uname -r
2.6.18-194.32.1.el5
[root@mylinux2 ~]#

2. Go to the URL: http://oss.oracle.com/projects/ocfs2/ , select the download tab and navigate to the correct rpm download for your kernel.

2.6.18-194.32.1.el5    2011.01.20    Packages for RHEL5 2.6.18-194.32.1.el5

3. Go to the URL: http://oss.oracle.com/projects/ocfs2-tools/, select the download tab and navigate to the correct rpm downloads for your OS. For this OS, we download the following files:

ocfs2-tools-1.4.4-1.el5.x86_64.rpm            2010.04.19   7a2f59a05f2cf1bea24dc04f34b09371   OCFS2 tools
ocfs2-tools-debuginfo-1.4.4-1.el5.x86_64.rpm  2010.04.19   91d6e65e902dedcd28e8e4f2d9fb4271   OCFS2 tools debuginfo
ocfs2-tools-devel-1.4.4-1.el5.x86_64.rpm      2010.04.19   2e47beaab89ebba8b1d276fb894184d5   OCFS2 tools libraries/header
ocfs2console-1.4.4-1.el5.x86_64.rpm           2010.04.19   78ccf0cf8564a6d5b48d534c7f3a07bc

4. Once the download completes, transfer all the files to all nodes in the cluster. It is best at this point to create a temporary directory under /tmp to store your files, using the following commands.

[root@mylinux1 tmp]# mkdir oracle_tmp
[root@mylinux1 tmp]# chmod 777 oracle_tmp
[root@mylinux1 tmp]#
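One way to stage the RPMs on the second node, assuming the same temporary directory is used there, is to copy them with scp:

# Create the staging directory on node 2 and copy the downloaded RPMs to it.
ssh root@mylinux2 'mkdir -p /tmp/oracle_tmp && chmod 777 /tmp/oracle_tmp'
scp /tmp/oracle_tmp/*.rpm root@mylinux2:/tmp/oracle_tmp/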

5. Once the files are in place, log on as root and install them with the rpm command on all nodes of the RAC.

rpm -Uvh ocfs2-tools-1.4.4-1.el5.x86_64.rpm
rpm -Uvh ocfs2-2.6.18-194.32.1.el5-1.4.7-1.el5.x86_64.rpm
rpm -Uvh ocfs2console-1.4.4-1.el5.x86_64.rpm

[root@mylinux1 oracle_tmp]# rpm -Uvh ocfs2-tools-1.4.4-1.el5.x86_64.rpm
warning: ocfs2-tools-1.4.4-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing… ########################################### [100%]
1:ocfs2-tools ########################################### [100%]
[root@mylinux1 oracle_tmp]# rpm -Uvh ocfs2-2.6.18-194.32.1.el5-1.4.7-1.el5.x86_64.rpm
warning: ocfs2-2.6.18-194.32.1.el5-1.4.7-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing… ########################################### [100%]
1:ocfs2-2.6.18-194.32.1.el########################################### [100%]
[root@mylinux1 oracle_tmp]# rpm -Uvh ocfs2console-1.4.4-1.el5.x86_64.rpm
warning: ocfs2console-1.4.4-1.el5.x86_64.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing… ########################################### [100%]
1:ocfs2console ########################################### [100%]
[root@mylinux1 oracle_tmp]#
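Installation can be verified on each node by querying the RPM database; the tools, console, and kernel-module packages should all be listed:

# Confirm the OCFS2 packages are installed on this node.
rpm -qa | grep -i ocfs2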

This completes the download and installation of OCFS2 on Linux to support an Oracle RAC system.

Larry J. Catt, OCP 9i, 10g
oracle@allcompute.com
www.allcompute.com

Unable to mount OCFS2 drives

Oracle provides the OCFS2 file system to support Oracle RAC file storage. This file system provides a locking mechanism which allows files to be accessed by multiple Oracle instances while avoiding corruption. The OCFS2 cluster stack must be started before any OCFS2-formatted mount points can be accessed. This article shows the error generated when the OCFS2 stack has not been started and how to resolve the problem.

General OS error:

[root@mylinux init.d]# mount /dev/mapper/MPATH10 /u02
mount.ocfs2: Unable to access cluster service while trying initialize cluster

Resolution:

1. Log on to your server as root.
2. Change directory to /etc/init.d

[root@mylinux /]# cd /etc/init.d
[root@mylinux init.d]# pwd
/etc/init.d
[root@mylinux init.d]#

3. Execute the OS layer command ./o2cb load.

[root@mylinux init.d]# ./o2cb load
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading filesystem "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
[root@mylinux init.d]#

4. Execute the OS layer command ./o2cb online.

[root@mylinux init.d]# ./o2cb online
Starting O2CB cluster ocfs2: OK
[root@mylinux init.d]#

5. Attempt to mount your ocfs2 storage device.

[root@mylinux init.d]# mount /dev/mapper/MPATH10 /u02
[root@mylinux init.d]#

6. This completes restarting OCFS2 binaries.
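If this volume should also return automatically after a reboot (and the O2CB service itself is configured to start at boot), it can be added to /etc/fstab with the _netdev option, which delays the mount until the network is up. A sketch using the device and mount point from this example:

# Example /etc/fstab entry for the OCFS2 volume used above.
# _netdev ensures the mount is attempted only after networking is available.
/dev/mapper/MPATH10   /u02   ocfs2   _netdev,defaults   0 0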

Larry J. Catt, OCP 9i, 10g
oracle@allcompute.com
www.allcompute.com

Manually initializing Oracle OCFS2 stack

Oracle provides the OCFS2 file system to support Oracle RAC file storage. This file system provides a locking mechanism which allows files to be accessed by multiple Oracle instances while avoiding corruption. The OCFS2 cluster stack must be started before any OCFS2-formatted mount points can be accessed. This article shows the error generated when the OCFS2 stack has not been started and how to resolve the problem.

General OS error:


[root@mylinux /]# mount /dev/mapper/MPATH10 /u02
mount.ocfs2: Unable to access cluster service while trying initialize cluster

Resolution:

1. Log on to your server as root.
2. Execute the OS layer command /etc/init.d/o2cb enable.

[root@mylinux /]# /etc/init.d/o2cb enable
Loading filesystem "configfs": OK
Mounting configfs filesystem at /sys/kernel/config: OK
Loading filesystem "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
[root@mylinux /]#

3. Execute the OS layer command /etc/init.d/o2cb start

[root@mylinux /]# /etc/init.d/o2cb start
Starting O2CB cluster ocfs2: OK
[root@mylinux /]#

4. Attempt to mount your ocfs2 storage device.

[root@mylinux /]# mount /dev/mapper/MPATH10 /u02
[root@mylinux /]#

5. This completes restarting OCFS2 binaries.
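If the stack has to be initialized manually like this after every reboot, the O2CB and OCFS2 init scripts are probably not enabled at boot. Assuming the stock init scripts shipped with ocfs2-tools, they can be enabled with chkconfig:

# Enable the cluster stack and the OCFS2 mount service at boot,
# then confirm the runlevel configuration.
chkconfig o2cb on
chkconfig ocfs2 on
chkconfig --list | grep -E 'o2cb|ocfs2'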

Larry J. Catt, OCP 9i, 10g
oracle@allcompute.com
www.allcompute.com