This how-to describes an easy, step-by-step installation of the Red Hat Cluster
Suite on three CentOS nodes: two of them prepared as cluster nodes and one
as the cluster's shared storage.
Here we create a cluster with the IP address 192.168.0.100, which serves its services from the nodes 192.168.0.101 and 192.168.0.102, and uses a GFS2 filesystem reachable over iSCSI on 192.168.0.103.
Hardware requirements:
One server for the cluster master (node1, which also runs the luci management interface), and two servers (virtual or physical) for the second cluster node and the storage node.
Software requirements:
RHEL 6 or CentOS 6; on RHEL, a subscription to the High Availability add-on channel is required. We will be working with the packages luci, ricci, rgmanager, and cman.
Prerequisites:
On all the nodes, stop and disable the firewall:
#service iptables stop
#service ip6tables stop
#chkconfig iptables off
#chkconfig ip6tables off
Disable SELinux (the change takes effect after a reboot; setenforce 0 switches to permissive mode immediately):
#vi /etc/selinux/config
SELINUX=disabled
Stop and disable NetworkManager (the cluster stack does not support it):
[root@node1 ~]# /etc/init.d/NetworkManager stop
[root@node1 ~]# chkconfig NetworkManager off
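Since these prerequisites are identical on every node, they can be scripted. A minimal sketch, assuming passwordless root SSH from your workstation to all three hosts (the host list matches the names used below):
#!/bin/bash
# Apply the firewall, SELinux and NetworkManager prerequisites on every node.
for host in node1.com node2.com iscsi.com; do
  ssh root@"$host" '
    service iptables stop;  chkconfig iptables off
    service ip6tables stop; chkconfig ip6tables off
    sed -i "s/^SELINUX=.*/SELINUX=disabled/" /etc/selinux/config
    setenforce 0                       # permissive until the next reboot
    service NetworkManager stop; chkconfig NetworkManager off
  '
done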
You will probably need to set the fully qualified domain name of each cluster node as its hostname, and list all the nodes in each node's /etc/hosts file, for example:
[root@node1 ~]# cat /etc/hosts
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
192.168.0.101 node1.com
192.168.0.102 node2.com
192.168.0.103 iscsi.com [Add these entries on all the nodes]
Then set the hostname in /etc/sysconfig/network:
HOSTNAME=node1.com [use node2.com on the second node and iscsi.com on the storage node]
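The HOSTNAME line in /etc/sysconfig/network only takes effect at the next boot; to apply it immediately and confirm that the /etc/hosts entries resolve, something like the following works (shown for node1, adjust per node):
# Set the hostname now instead of waiting for a reboot
hostname node1.com
# Verify name resolution between the nodes
ping -c1 node2.com
ping -c1 iscsi.com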
Package Installation:
ON NODE1
[root@node1 ~]# yum install cman rgmanager luci
At the computer running luci, initialize the luci server using the luci_admin init command. For example:
# luci_admin init
Initializing the Luci server
Creating the 'admin' user
Enter password: <Type password and press ENTER.>
Confirm password: <Re-type password and press ENTER.>
Please wait...
The admin password has been successfully set.
Generating SSL certificates...
Luci server has been successfully initialized
Restart the Luci server for changes to take effect
eg. service luci restart
[root@node1 ~]# service luci start
Start luci... [ OK ]
Point your web browser to https://node1.com:8084 (or equivalent) to access luci
[root@node1 ~]# yum install ricci ; passwd ricci ; service ricci start ; chkconfig ricci on ; service rgmanager start ; chkconfig rgmanager on ; service cman start ; chkconfig cman on
At a Web browser, place the URL of the luci server into the URL address box and click Go (or the equivalent). The URL syntax for the luci server is https://node1.com:8084. The first time you access luci, two SSL certificate dialog boxes are displayed; upon acknowledging them, your Web browser displays the luci login page.
ON CLUSTER NODES (including NODE1)
[root@node2 ~]# yum install cman rgmanager ricci ; passwd ricci ; service ricci start ; chkconfig ricci on ; service rgmanager start ; chkconfig rgmanager on ; service cman start ; chkconfig cman on
#yum install lvm2 -y
#yum install iscsi-initiator-utils* -y
ON STORAGE NODE
[root@iscsi ~]# yum install scsi-target-utils -y
[root@iscsi ~]# rpm -qa | grep target
selinux-policy-targeted-3.7.19-195.el6.noarch
scsi-target-utils-1.0.24-10.el6.x86_64
vi /etc/tgt/targets.conf
# Sample target with one LUN only. Defaults to allow access for all initiators:
<target iqn.20013-14.com.iscsi:server.target1>
backing-store /dev/sdb
</target>
# /etc/init.d/tgtd start
# chkconfig tgtd on
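The proof of concept below logs in to the target directly; if the cluster nodes have not yet discovered the target portal, run a sendtargets discovery first. A minimal sketch, using the storage node's IP from above:
# Discover the targets exported by the storage node (run on node1 and node2)
iscsiadm -m discovery -t sendtargets -p 192.168.0.103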
Proof of Concept:
[root@node1 ~]# iscsiadm -m node -T iqn.20013-14.com.iscsi:server.target1 -p 192.168.0.103 --login
Logging in to [iface: default, target: iqn.20013-14.com.iscsi:server.target1, portal: 192.168.0.103,3260] (multiple)
Login to [iface: default, target: iqn.20013-14.com.iscsi:server.target1, portal: 192.168.0.103,3260] successful.
[root@node2 ~]# iscsiadm -m node -T iqn.20013-14.com.iscsi:server.target1 -p 192.168.0.103 --login
Logging in to [iface: default, target: iqn.20013-14.com.iscsi:server.target1, portal: 192.168.0.103,3260] (multiple)
Login to [iface: default, target: iqn.20013-14.com.iscsi:server.target1, portal: 192.168.0.103,3260] successful.
Check that the iSCSI-mapped device is /dev/sdb (otherwise adjust the following commands), then create a new Physical Volume, a new Volume Group and a new Logical Volume to use as shared storage for the cluster nodes, using the following commands:
[root@node1 ~]# dmesg
sdb: unknown partition table
sd 3:0:0:1: [sdb] Attached SCSI disk
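dmesg shows the new disk has no partition table, yet the next command works on /dev/sdb1, so a partition must be created first. A minimal non-interactive sketch, assuming the whole disk becomes one partition (interactively, fdisk /dev/sdb with n, p, 1, defaults, w achieves the same):
# Label the disk and create a single partition spanning it
parted -s /dev/sdb mklabel msdos mkpart primary 1MiB 100%
# Make the kernel re-read the partition table
partprobe /dev/sdb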
[root@node1 ~]# pvcreate /dev/sdb1
[root@node1 ~]# vgcreate -c y redhat /dev/sdb1
Clustered volume group "redhat" successfully created
[root@node1 ~]# lvcreate -L 1GB redhat -n gfscluster
Error locking on node 192.168.0.102: Volume group for uuid not found: 7Tx6QN8ayQZ8FoAJZa6uX7qgP81FpwpOfcYKWhOLks0JESQ3snRbJxmXO25hf0Rx
Failed to activate new LV.
Solution for the above issue:
To resolve the issue, update the LVM cache on all nodes using:
#partprobe
#clvmd -R
Then rescan the volumes using:
#vgscan
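Note that a clustered volume group (vgcreate -c y) needs the clvmd daemon running on every cluster node, and clvmd -R only works if it is. A minimal sketch for getting it in place, assuming the lvm2-cluster package is available from your repositories:
# Install and enable clustered LVM on every cluster node
yum install -y lvm2-cluster
lvmconf --enable-cluster    # switch lvm.conf to cluster-wide locking
service clvmd start
chkconfig clvmd on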
[root@node1 ~]# lvcreate -L 1GB redhat -n gfscluster
Logical volume "gfscluster" created
[root@node1 ~]# lvs
LV         VG       Attr       LSize Pool Origin Data%  Move Log Cpy%Sync Convert
gfscluster redhat   -wi-a----- 1.00g
lv_root    vg_node1 -wi-ao---- 7.54g
lv_swap    vg_node1 -wi-ao---- 1.97g
Note: lock_nolock is only safe when a single node accesses the filesystem at a time; for simultaneous mounts from both cluster nodes, use -p lock_dlm with a lock table instead (see the sketch after the mkfs output below).
[root@node1 ~]# mkfs.gfs2 -p lock_nolock -j2 /dev/redhat/gfscluster
This will destroy any data on /dev/redhat/gfscluster.
It appears to contain: symbolic link to `../dm-2'
Are you sure you want to proceed? [y/n] y
Device:                    /dev/redhat/gfscluster
Blocksize:                 4096
Device Size                1.00 GB (262144 blocks)
Filesystem Size:           1.00 GB (262142 blocks)
Journals:                  2
Resource Groups:           4
Locking Protocol:          "lock_nolock"
Lock Table:                ""
UUID:                      383c06a0-faea-23af-d49d-1c45220e357b
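As flagged above, lock_nolock is only suitable for single-node access. A clustered variant of the same mkfs call, assuming the cluster will be named rhel-cluster (the name used when creating the cluster in luci later), would look like this:
# One journal per node that mounts the filesystem (-j2), DLM locking
# tied to the cluster name (assumed here: rhel-cluster)
mkfs.gfs2 -p lock_dlm -t rhel-cluster:gfscluster -j2 /dev/redhat/gfscluster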
Create the mount point on all cluster nodes, then edit /etc/fstab and add:
#mkdir -p /mnt/gfs2
/dev/redhat/gfscluster /mnt/gfs2 gfs2 defaults,acl 0 0
#mount -a [Mount the devices on all the cluster nodes]
Check:
[root@node1 ~]# df -h
Filesystem                     Size  Used Avail Use% Mounted on
/dev/mapper/vg_node1-lv_root   7.5G  2.2G  5.2G  30% /
tmpfs                          497M   26M  472M   6% /dev/shm
/dev/sda1                      485M   34M  426M   8% /boot
/dev/mapper/redhat-gfscluster  1.0G  259M  766M  26% /mnt/gfs2
Check the LUN on the storage node:
[root@iscsi ~]# tgtadm --lld iscsi --op show --mode target
Target 1: iqn.20013-14.com.iscsi:server.target1
    System information:
        Driver: iscsi
        State: ready
    I_T nexus information:
    LUN information:
        LUN: 0
            Type: controller
            SCSI ID: IET 00010000
            SCSI SN: beaf10
            Size: 0 MB, Block size: 1
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: null
            Backing store path: None
            Backing store flags:
        LUN: 1
            Type: disk
            SCSI ID: IET 00010001
            SCSI SN: beaf11
            Size: 2147 MB, Block size: 512
            Online: Yes
            Removable media: No
            Prevent removal: No
            Readonly: No
            Backing store type: rdwr
            Backing store path: /dev/sdb
            Backing store flags:
    Account information:
    ACL information:
        ALL
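On the initiator side, the matching session can be listed from either cluster node for a quick cross-check:
# Show active iSCSI sessions (run on node1 or node2)
iscsiadm -m session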
Configure Apache to serve one or more virtual hosts from a directory on the shared storage (here /data is assumed to be the GFS2 mount point; adjust the paths to /mnt/gfs2 if you used the fstab entry above). For example, on both nodes, add to the end of /etc/httpd/conf/httpd.conf:
#vi /etc/httpd/conf/httpd.conf
ServerAdmin webmaster@mgmt.local
DocumentRoot /data/websites/default
ServerName rhel-cluster.mgmt.local
ErrorLog logs/rhel-cluster_mgmt_local-error_log
CustomLog logs/rhel-cluster_mgmt_local-access_log common
Save and exit.
Create two directories under /data:
#mkdir /data/websites
#mkdir /data/websites/default
Create an index file:
#vi /data/websites/default/index.html
Cluster Nodes Conf…..
Set Apache to start at boot time and start it:
#chkconfig httpd on
#service httpd start
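Before letting the cluster move the site around, it is worth verifying the Apache configuration and the vhost response locally on each node. A quick sanity check, using the ServerName assumed in the configuration above:
# Syntax-check the configuration
apachectl -t
# Request the site locally, forcing the vhost's ServerName as Host header
curl -H "Host: rhel-cluster.mgmt.local" http://localhost/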
Open luci in a browser:
https://node1.com:8084
1. Select the cluster tab.
2. Click Create a New Cluster.
3. At the Cluster Name text box, enter the cluster name “rhel-cluster”. Add the node name and password for each cluster node.
4. Click Submit to download, install, configure and start the cluster software on each node.
Add a resource: choose IP Address and use 192.168.0.100.
Create a service named “cluster” and add the “IP Address” resource you created before:
check “Automatically start this service”
check “Run exclusive”
choose “Recovery policy” as “Relocate”
Save the service, enable it, and start it on one cluster node.
Check the cluster configuration file:
#cat /etc/cluster/cluster.conf
Check the shared IP address:
#/sbin/ip addr list
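rgmanager also ships a status tool, so the member and service state can be checked from any cluster node:
# Show cluster members and where the service is currently running
clustat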
Check the Cluster
Shut down or unplug one node from the network; the website on 192.168.0.100 should still be reachable.
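Failover can also be exercised without pulling cables: clusvcadm relocates a service between members on demand (service and node names as used above):
# Relocate the "cluster" service to node2, then back again
clusvcadm -r cluster -m node2.com
clusvcadm -r cluster -m node1.com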