Monday, April 2, 2012

Configure RedHat Cluster with GFS2 on RedHat Enterprise Linux 6 on VMware ESXi

1) Installing RHEL6 on VMware ESXi with clustering packages.
a) Creating a RedHat Enterprise Linux 6.0 Virtual image
i) Open the vSphere Client and connect to your VMware ESXi server.
ii) Log in to the vSphere Client.
iii) Go to File -> New -> Virtual Machine (VM).
iv) Select the Custom option in the Create New Virtual Machine window and click Next.
v) Give the virtual machine (VM) a name (in my case, the name of my virtual machine is RHEL6-Cluster1) and click Next.
vi) Select a resource pool where you want your VM to reside (in my case, I created a resource pool named RHEL6-Cluster) and click Next.
vii) Select a datastore to store your VM files and click Next.
viii) Select the VM version suitable for your environment (in my case, VM version 7) and click Next.
ix) Specify the guest operating system type as Linux and select the version as RedHat Enterprise Linux 6.0 (32-bit). Click Next.
x) Select the number of CPUs for the VM (you can assign multiple CPUs if your processor is multicore; in my case I assigned 1 CPU) and click Next.
xi) Configure the memory for your VM (assign memory wisely so that VM performance is not degraded when multiple VMs run in parallel). Click Next.
xii) Create the network connection for your VM (generally, do not change the default connection). Click Next.
xiii) Select the SCSI controller type LSI Logic Parallel and click Next.
xiv) Select "Create a new virtual disk" and click Next.
xv) Allocate virtual disk capacity for the VM as needed (in my case, the virtual disk size was 10GB). Select "Support clustering features such as fault tolerance". Select "Specify a datastore" and assign a datastore to store the VM. Click Next.
xvi) Under Advanced Options, leave the Virtual Device Node as SCSI (0:0). Click Next.
xvii) On the "Ready to Complete" window, select "Edit the virtual machine settings before completion" and click Continue.
xviii) On the "RHEL6-Cluster1 - Virtual Machine Properties" window, select the New SCSI Controller and change the SCSI bus sharing type from None to "Virtual" so that virtual disks can be shared between VMs.
xix) Similarly, for "New CD/DVD", supply either a client device, a host device, or an operating system installer ISO file located on the datastore to start the installation of the operating system. Note: do not forget to enable the "Connect at power on" option when using the Host Device or Datastore ISO File option.
xx) Now click Finish. You are now ready to start the installation of the RHEL6 operating system on the virtual machine.

2) Installing RedHat Enterprise Linux 6.0 on the Virtual Machine.
a) File System Partitioning for the RHEL6.0 VM.
i) Start the RHEL Installation.
ii) Select custom disk partitioning.
iii) Create a /boot partition of 512MB.
iv) Create an LVM physical volume from the remaining free space on the virtual disk.
v) Create a volume group, then create logical volumes for swap and "/" in the available LVM disk space.
vi) Apply the above changes to create the partition layout.

b) Selecting the packages required for clustering
i) Select the packages to be installed by choosing custom package selection (enable the additional repositories High Availability and Resilient Storage).
ii) Select all packages under High Availability and Resilient Storage, then click Next to start the installation of the operating system. Note: at the end of the installation, the cman, luci, ricci, rgmanager, clvmd, modclusterd and gfs2-tools packages will be installed on the system (a quick verification command is shown after this list).
iii) After the operating system is installed, restart the VM, boot into it, perform any post-installation tasks, and then shut down the guest RHEL6.0 VM.
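After the first boot, you can optionally confirm that the cluster packages are present. The package names below are the ones I would expect on RHEL 6 (note that the GFS2 tools ship in the gfs2-utils package and clvmd is part of lvm2-cluster):

rpm -q cman luci ricci rgmanager lvm2-cluster gfs2-utils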

3) Cloning the RHEL6.0 VM image into two copies named RHEL6-Cluster2 and RHEL6-Cluster3.
i) Open the datastore browser by right-clicking the datastore on the Summary page of the ESXi console and selecting "Browse Datastore".
ii) Create two directories, RHEL6-Cluster2 and RHEL6-Cluster3.
iii) Copy the VM image files from the RHEL6-Cluster1 directory to the two directories above, i.e., RHEL6-Cluster2 and RHEL6-Cluster3.
iv) Once you have copied all the files to their respective directories, browse to the RHEL6-Cluster2 directory in the datastore, locate the "RHEL6-Cluster1.vmx" file, right-click on it and select "Add to Inventory".
v) In the "Add to Inventory" wizard, add the VM as RHEL6-Cluster2 and finish the process.
vi) Similarly, perform the previous step to add RHEL6-Cluster3 to the inventory.

4) Adding a shared hard disk to all 3 VMs
a) Adding a hard disk for clustering to RHEL6-Cluster1 VM/node.
i) In the vSphere Client, select the RHEL6-Cluster1 VM, then open the Virtual Machine Properties window by right-clicking and selecting "Edit Settings".
ii) Click "Add" in the Virtual Machine Properties window; the Add Hardware window pops up.
iii) Select Hard Disk as the device type and click Next.
iv) Select "Create a new virtual disk" and click Next.
v) Specify the required disk size, select Disk Provisioning as "Support clustering features such as fault tolerance", and set the Location to "Store with the virtual machine". Click Next.
vi) In the Advanced Options window, set the Virtual Device Node to SCSI (1:0). Click Next and complete the "Add Hardware" process.
vii) On the "RHEL6-Cluster1 - Virtual Machine Properties" window, select SCSI controller 1 and change the SCSI bus sharing type from None to "Virtual" so that virtual disks can be shared between VMs.

b) Sharing the RHEL6-Cluster1 node's additional hard disk with the other two VMs/cluster nodes.
i) In the vSphere Client, select the RHEL6-Cluster2 VM, then open the Virtual Machine Properties window by right-clicking and selecting "Edit Settings".
ii) Click "Add" in the Virtual Machine Properties window; the Add Hardware window pops up.
iii) Select Hard Disk as the device type and click Next.
iv) Select "Use an existing virtual disk" and click Next.
v) Browse the datastore, locate the RHEL6-Cluster1 directory and select RHEL6-Cluster1_1.vmdk to add it as the second hard disk of this VM. Click Next. (Note: an additional hard disk is named VMname_1.vmdk, VMname_2.vmdk and so on. Do not select RHEL6-Cluster1.vmdk, as that is your VM image file.)
vi) In the Advanced Options window, set the Virtual Device Node to SCSI (1:0). Click Next and complete the "Add Hardware" process.
vii) On the "RHEL6-Cluster2 - Virtual Machine Properties" window, select SCSI controller 1 and change the SCSI bus sharing type from None to "Virtual" so that virtual disks can be shared between VMs.

c) Similarly perform the above steps described under section (b) for the 3rd node.

5) Configuring the static IP address, hostname and /etc/hosts file on all three nodes. 
Assign static IP addresses to all three VMs, for example:
RHEL6-Cluster1: <static IP 1>
RHEL6-Cluster2: <static IP 2>
RHEL6-Cluster3: <static IP 3>
Gateway in this case is: <gateway IP>. DNS in this case is: <DNS server IP>.
DOMAIN in this case is: <domain name>.
i) To assign the above IPs and hostnames, start all three VMs.
ii) Note: when a VM starts, the NetworkManager daemon/service on RHEL6 will have brought the network up by obtaining an IP address from DHCP and assigning it to eth0 or eth1. Note down the hardware address of your active Ethernet interface by running the ifconfig command (the HWaddr looks like 00:0C:29:86:D3:E6; it needs to be added to "/etc/sysconfig/network-scripts/ifcfg-eth0", depending upon which Ethernet port is active on your image).
iii) Disable and stop the NetworkManager daemon, as the cluster-related daemons require it to be off.
To stop the network manager daemon, run
/etc/init.d/NetworkManager stop
To disable the NetworkManager service, run
chkconfig --level 345 NetworkManager off
iv) Add the following details to the "/etc/sysconfig/network-scripts/ifcfg-eth0" file:
NAME="System eth0"
Note: HWADDR and DEVICE may change from VM to VM.
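For reference, a complete static configuration might look like the sketch below. The IP values are placeholders you must replace with your own addresses from the table in step 5, and the HWADDR shown is just the example address from step ii):

DEVICE=eth0
HWADDR=00:0C:29:86:D3:E6
NAME="System eth0"
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=<static IP of this node>
NETMASK=<netmask>
GATEWAY=<gateway IP>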
v) Now set the hostname (RHEL6-Cluster1, RHEL6-Cluster2 or RHEL6-Cluster3) in the "/etc/sysconfig/network" file inside each VM.
After adding the hostname, the "/etc/sysconfig/network" file on node 1 will look like this:
NETWORKING=yes
HOSTNAME=RHEL6-Cluster1

vi) Now add hostname resolution information to the /etc/hosts file, as below (the node IPs are placeholders; use the static addresses you assigned above):
<IP of node 1>   RHEL6-Cluster1
<IP of node 2>   RHEL6-Cluster2
<IP of node 3>   RHEL6-Cluster3
127.0.0.1        localhost.localdomain localhost
::1              localhost6.localdomain6 localhost6
Note: similarly, perform the above steps on the other two VMs.
vii) After configuring all 3 VMs, restart them and verify the network by pinging each VM from the others to confirm the network configuration is correct and working, as in the example below.
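For example, from RHEL6-Cluster1 (the hostnames resolve via the /etc/hosts entries added above):

ping -c 3 RHEL6-Cluster2
ping -c 3 RHEL6-Cluster3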

6) Configuring the cluster on RHEL6.0 with the High Availability Management web UI.
i) Start the luci service on all 3 nodes by running the following command in a terminal:
service luci start
ii) Start the ricci service on all 3 nodes by running the following command in a terminal. The ricci daemon listens on port 11111.
service ricci start
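Optionally, confirm that ricci is listening on port 11111 (netstat is part of the standard net-tools package on RHEL6):

netstat -tlnp | grep 11111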
iii) Open a browser and connect to the High Availability Management console (luci listens on port 8084 by default, e.g. https://RHEL6-Cluster1:8084).
iv) Log in to the console with your root user credentials.
v) Create a cluster named "mycluster".
vi) Add all 3 nodes to the cluster as below:

 Node Hostname        Root Password   Ricci Port
 RHEL6-Cluster1       *********       11111
 RHEL6-Cluster2       *********       11111
 RHEL6-Cluster3       *********       11111

Click on "Create Cluster" to create the cluster and add all the nodes to it.
After this action, all 3 nodes are part of the cluster "mycluster", and a cluster.conf file has been generated at "/etc/cluster/cluster.conf" on all three nodes.
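For reference, the generated file for this setup would look roughly like the sketch below (a minimal illustration; the exact attributes and config_version may differ on your system, and note that no fence devices are defined yet):

cat /etc/cluster/cluster.conf

<?xml version="1.0"?>
<cluster config_version="1" name="mycluster">
  <clusternodes>
    <clusternode name="RHEL6-Cluster1" nodeid="1"/>
    <clusternode name="RHEL6-Cluster2" nodeid="2"/>
    <clusternode name="RHEL6-Cluster3" nodeid="3"/>
  </clusternodes>
  <cman/>
  <fencedevices/>
  <rm/>
</cluster>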

7) Creating a GFS2 file system with clustering.
a) Once you have created the cluster and added all 3 nodes as members, run the following command on all three nodes to verify the cluster node status.
[root@RHEL6-Cluster1 ~]# ccs_tool lsnode
Cluster name: mycluster, config_version: 1
 Nodename                Votes  Nodeid  Fencetype
 RHEL6-Cluster1          1      1
 RHEL6-Cluster2          1      2
 RHEL6-Cluster3          1      3

b) Now start the cman and rgmanager services on all 3 nodes by running:
service cman start
service rgmanager start

c) Now check the status of your cluster by running the commands below.

[root@RHEL6-Cluster1 ~]# clustat
Cluster Status for mycluster @ Wed Jul 16 16:27:36 2012
Member Status: Quorate
 Member Name                  ID   Status
 RHEL6-Cluster1                1   Online, Local
 RHEL6-Cluster2                2   Online
 RHEL6-Cluster3                3   Online

[root@RHEL6-Cluster1 ~]# cman_tool status
Version: 6.2.0
Config Version: 1
Cluster Name: mycluster
Cluster Id: 65461
Cluster Member: Yes
Cluster Generation: 48
Membership state: Cluster-Member
Nodes: 3
Expected votes: 3
Total votes: 3
Node votes: 1
Quorum: 2
Active subsystems: 9
Ports Bound: 0 11 177
Node name:
Node ID: 1
Multicast addresses:
Node addresses:

d) Now we need to enable clustering in LVM2 by running:
lvmconf --enable-cluster

e) Now we need to create the LVM2 volumes on the additional hard disk attached to the VM. Run the following commands on one node:

pvcreate /dev/sdb
vgcreate -c y mygfstest_gfs2 /dev/sdb
lvcreate -n mytestGFS2 -L 7G mygfstest_gfs2

Note: by executing the above commands in order, we have created an LVM physical volume, a clustered volume group (the -c y flag), and a logical volume. You can verify the result as shown below.
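To double-check what was created, the standard LVM reporting commands can be used:

pvs
vgs
lvs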

f) Now start the clvmd service on all 3 nodes by running:

service clvmd start

g) Now we have to create a GFS2 file system on the LVM volume created above. The general format of the command is:

mkfs -t gfs2 -p <LockProtoName> -t <ClusterName>:<FSName> -j <NumberOfJournals> <BlockDevice>
mkfs -t gfs2 -p lock_dlm -t mycluster:mygfs2 -j 4 /dev/mapper/mygfstest_gfs2-mytestGFS2

This will format the LVM device and create a GFS2 file system.

h) Now mount the GFS2 file system on all 3 nodes by running the command below:
mount /dev/mapper/mygfstest_gfs2-mytestGFS2 /GFS2
where /GFS2 is the mount point. You might need to create the /GFS2 directory first, as shown below.
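For example, on each node:

mkdir /GFS2
mount /dev/mapper/mygfstest_gfs2-mytestGFS2 /GFS2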
Congrats, your clustered GFS2 file system is ready for use. Check the size and mount details of the file system by running:
df -kh
8) Now that we have a fully functional cluster and a mountable GFS2 file system, we need to make sure all the necessary daemons start whenever the VMs are restarted.

chkconfig --level 345 luci on
chkconfig --level 345 ricci on
chkconfig --level 345 rgmanager on
chkconfig --level 345 clvmd on
chkconfig --level 345 cman on
chkconfig --level 345 modclusterd on
chkconfig --level 345 gfs2 on
If you want the GFS2 file system to be mounted at startup, add the file system and mount point details to the /etc/fstab file:

echo "/dev/mapper/mygfstest_gfs2-mytestGFS2 /GFS2 gfs2 defaults,noatime,nodiratime 0 0" >> /etc/fstab
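To test the new fstab entry without a reboot, you can unmount the file system and let mount read the details back from fstab:

umount /GFS2
mount /GFS2
df -kh /GFS2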

DISCLAIMER: The information provided on this website comes without warranty of any kind and is distributed AS IS. Every effort has been made to make the information as accurate as possible, but no warranty of fitness is implied. The information may be incomplete, may contain errors or may have become out of date. Use of the information described herein is your responsibility; if you use it in your own environment, you do so at your own risk.

Copyright © 2012 LINUXHOWTO.IN


  1. Please correct: it's luci and not lcui.

  2. You have not configured fencing at all.

    Sure, the cluster can start without it... but it won't recover from a node failure (or a loss of network communications with one node) automatically.

    Instead, the GFS will be "frozen" and the nodes will wait forever. Fencing is an absolutely required component in RedHat clusters: without it, the cluster is not really a High-Availability solution, but more of a fault amplifier.

  3. copy tutorial !!! hahaha

  4. Thanks, this is useful stuff. Do you also work with any other clusters?
