High Availability: Linux Clustering on RHEL6/CentOS6 with Luci and Ricci
As we know, clustering can be done in several ways; this time we take a short look at "High Availability – Linux Clustering using Luci and Ricci" on RHEL6/CentOS6.

To keep us on the same page, the topology used here is a host server at IP 192.168.100.250 running the following VMs:

arrif-base: IP 192.168.100.120 (arrif-base)
arrif-vm1: IP 192.168.100.102 (node1)
arrif-vm2: IP 192.168.100.103 (node2)
IP 192.168.100.121 (virtual IP brought up automatically on whichever node is active)
Shared storage is an iSCSI disk; if you do not have shared storage yet, first create an iSCSI target / multipath initiator on RHEL6/CentOS6 as described in the linked article.
First, configure name resolution in /etc/hosts (or use a DNS server):

# vim /etc/hosts
Add the following entries on all three servers:
192.168.100.120 arrif-base
192.168.100.102 arrif-vm1
192.168.100.103 arrif-vm2
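If you provision the nodes with a script, the same three entries can be appended idempotently. A minimal sketch (the IPs and hostnames are the ones above; HOSTS_FILE defaults to a local demo file so it can be tried safely — point it at /etc/hosts on the real servers):

```shell
# Append each cluster host entry only if it is not already present.
HOSTS_FILE="${HOSTS_FILE:-./hosts.demo}"
touch "$HOSTS_FILE"
for entry in "192.168.100.120 arrif-base" \
             "192.168.100.102 arrif-vm1" \
             "192.168.100.103 arrif-vm2"; do
    grep -qF "$entry" "$HOSTS_FILE" || printf '%s\n' "$entry" >> "$HOSTS_FILE"
done
```

Running the loop again adds nothing, so it is safe to re-run during provisioning.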
Set the appropriate hostname on each node:
# vim /etc/sysconfig/network
HOSTNAME=arrif-vm1
(use arrif-vm2 on node2)
Disable SELinux:

# vi /etc/selinux/config
SELINUX=disabled
On both nodes, install the following packages:
# yum groupinstall "High Availability" "Resilient Storage"
# yum install iscsi-initiator-utils
Install HA Management on one node only; in this example it is installed on arrif-vm1 (node1):
# yum groupinstall "High Availability Management"
Iptables / firewall settings for the High Availability cluster on RHEL/CentOS.

Allow access for cman (Cluster Manager):

# iptables -I INPUT -m state --state NEW -m multiport -p udp -s 192.168.100.0/24 -d 192.168.100.0/24 --dports 5404,5405 -j ACCEPT
# iptables -I INPUT -m addrtype --dst-type MULTICAST -m state --state NEW -m multiport -p udp -s 192.168.100.0/24 --dports 5404,5405 -j ACCEPT
Firewall rules for dlm (Distributed Lock Manager, port 21064) and ricci (port 11111):

# iptables -I INPUT -m state --state NEW -p tcp -s 192.168.100.0/24 -d 192.168.100.0/24 --dport 21064 -j ACCEPT
# iptables -I INPUT -m state --state NEW -p tcp -s 192.168.100.0/24 -d 192.168.100.0/24 --dport 11111 -j ACCEPT
Firewall rule for modclusterd (Conga remote agent):

# iptables -I INPUT -m state --state NEW -p tcp -s 192.168.100.0/24 -d 192.168.100.0/24 --dport 16851 -j ACCEPT
Firewall rules for luci (Conga user interface server):

# iptables -I INPUT -m state --state NEW -p tcp -s 192.168.100.0/24 -d 192.168.100.0/24 --dport 16851 -j ACCEPT
# iptables -I INPUT -m state --state NEW -p tcp -s 192.168.100.0/24 -d 192.168.100.0/24 --dport 8084 -j ACCEPT
# iptables -I INPUT -m state --state NEW -p udp -s 192.168.100.0/24 -d 192.168.100.0/24 --dport 8084 -j ACCEPT
Firewall rule for IGMP (Internet Group Management Protocol):

# iptables -I INPUT -p igmp -j ACCEPT
Save the iptables / firewall configuration:

# service iptables save
# service iptables restart
After completing the firewall configuration, set a password for the ricci user on both nodes:
# passwd ricci
Changing password for user ricci.
New password:
Enable the cluster services so they start automatically when either node boots:
# chkconfig ricci on
# service ricci start
# chkconfig cman on
# chkconfig clvmd on
# chkconfig rgmanager on
# chkconfig modclusterd on
On the node chosen as web management, install the web services and enable them at boot:
# yum install httpd mod_ssl
# chkconfig httpd on
# service httpd start
# chkconfig luci on
# service luci start
Then open the Luci interface in a browser: https://arrif-vm1:8084
1. Add both nodes to the cluster service: Homebase ==> Manage Clusters
Click "Create"
Cluster Name: cluster-name
Use the Same Password for All Nodes: yes
Node Name:
arrif-vm1
arrif-vm2
Download Packages: yes
Reboot Nodes Before Joining Cluster: yes
Enable Shared Storage Support: yes
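Once Luci creates the cluster, /etc/cluster/cluster.conf on each node should look roughly like the sketch below (the config_version, node IDs, and the two_node quorum settings are illustrative, not copied from a live system):

```xml
<?xml version="1.0"?>
<cluster config_version="1" name="cluster-name">
  <clusternodes>
    <clusternode name="arrif-vm1" nodeid="1"/>
    <clusternode name="arrif-vm2" nodeid="2"/>
  </clusternodes>
  <!-- two-node clusters need special quorum handling -->
  <cman expected_votes="1" two_node="1"/>
</cluster>
```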
2. In the Resources tab, add an IP Address resource:
Click "Resources" => Add => IP Address
IP Address: 192.168.100.121
Netmask: 24
3. Add a Filesystem resource:
Name: webroot
Filesystem Type: ext4
Mount Point : /var/www/html
Device, FS Label, or UUID: /dev/mapper/1IET_00010001p1
4. Add a Script resource for httpd:
Add => Script
Name: httpd
Full Path to Script File: /etc/init.d/httpd
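In cluster.conf the three global resources defined above end up roughly as follows (attribute names follow rgmanager's ip, fs, and script resource agents; a sketch, not verbatim Luci output):

```xml
<rm>
  <resources>
    <ip address="192.168.100.121" monitor_link="on"/>
    <fs name="webroot" device="/dev/mapper/1IET_00010001p1"
        fstype="ext4" mountpoint="/var/www/html"/>
    <script name="httpd" file="/etc/init.d/httpd"/>
  </resources>
</rm>
```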
Next comes the "Failover Domain" configuration stage.
Click Add to create a failover domain:
Name: failover-domain
Prioritized: yes
Restricted: yes
Set the cluster members and their priorities.
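The resulting failover domain section of cluster.conf should look roughly like this (the priorities are illustrative; with rgmanager a lower number means a more preferred node):

```xml
<rm>
  <failoverdomains>
    <failoverdomain name="failover-domain" ordered="1" restricted="1">
      <failoverdomainnode name="arrif-vm1" priority="1"/>
      <failoverdomainnode name="arrif-vm2" priority="2"/>
    </failoverdomain>
  </failoverdomains>
</rm>
```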
Next, configure the "Fence Devices" that will be used on the service nodes. Because the nodes run on KVM virtualization, use Fence virt (Multicast Mode) fencing, which can kill a node so that its services can be taken over by the rest of the cluster.
On the "Nodes" tab, click a node to add its fence device.
Click “Add Fence Method”
Method Name: “FenceMethod”
After creating "FenceMethod", click "Add Fence Instance" and select the KVM fencing agent "Fence virt (Multicast Mode)" (xvm virtual machine fencing).
To use fencing with KVM, the "fence-virtd-multicast" package (together with fence-virtd) must be installed on the host machine so that it can manage the guest nodes running on it.
# rpm -qa | grep -i fence
fence-virt-0.2.3-9.el6.x86_64
fence-agents-3.1.5-17.1.el6_3.x86_64
Install the missing packages (# yum install fence-virtd fence-virtd-multicast) so that the list becomes:
[root@arrif-vm1 ~]# rpm -qa | grep -i fence
fence-virtd-0.2.3-9.el6.x86_64
fence-virt-0.2.3-9.el6.x86_64
fence-agents-3.1.5-17.1.el6_3.x86_64
fence-virtd-multicast-0.2.3-9.el6.x86_64
Create the fence_virtd configuration on the host machine (press Enter to accept each default):
# fence_virtd -c
Parsing of /etc/fence_virt.conf failed.
Start from scratch [y/N]? y
Module search path [/usr/lib64/fence-virt]: <Enter>
Listener module [multicast]: <Enter>
Multicast IP Address [225.0.0.12]: <Enter>
Multicast IP Port [1229]: <Enter>
Interface [br0]: <Enter>
Key File [/etc/cluster/fence_xvm.key]: <Enter>
Backend module [libvirt]: <Enter>
Libvirt URI [qemu:///system]: <Enter>
Replace /etc/fence_virt.conf with the above [y/N]? y
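Accepting those defaults should leave an /etc/fence_virt.conf roughly like the following (a sketch; exact layout varies between fence-virt versions):

```
fence_virtd {
        listener = "multicast";
        backend = "libvirt";
        module_path = "/usr/lib64/fence-virt";
}

listeners {
        multicast {
                key_file = "/etc/cluster/fence_xvm.key";
                address = "225.0.0.12";
                interface = "br0";
                port = "1229";
                family = "ipv4";
        }
}

backends {
        libvirt {
                uri = "qemu:///system";
        }
}
```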
Start the fence_virtd service and enable it to run at boot:
# service fence_virtd start
# chkconfig fence_virtd on
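Note that /etc/cluster/fence_xvm.key is referenced above but never generated; a common way to create the shared key on the host is a 512-byte read from /dev/urandom (the size is conventional, not mandated):

```shell
# Generate a random shared key for fence_xvm and restrict its permissions.
mkdir -p /etc/cluster
dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=512 count=1 2>/dev/null
chmod 600 /etc/cluster/fence_xvm.key
```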
Copy the fence key file from the host machine to each node:
# scp /etc/cluster/fence_xvm.key arrif-vm1:/etc/cluster/
# scp /etc/cluster/fence_xvm.key arrif-vm2:/etc/cluster/
# clusvcadm -u
Now we will add a "Service Group" named apache-webserver:
Service Name : apache-webserver
Automatically Start This Service : yes
Run Exclusive: yes
Failover Domain : failover-domain
Recovery Policy : Relocate
Leave the other settings at their defaults, then use "Add Resource" to attach the resources to the apache-webserver service group in the following order:
– IP address = 192.168.100.121/24
– webroot = ext4 filesystem on /dev/mapper/1IET_00010001p1, mounted on /var/www/html
– httpd = script /etc/init.d/httpd
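When the service group is saved, the service definition in cluster.conf ties the three resources to the failover domain, roughly as follows (a sketch; the ref attributes point at the global resources defined earlier):

```xml
<service name="apache-webserver" autostart="1" exclusive="1"
         domain="failover-domain" recovery="relocate">
  <ip ref="192.168.100.121"/>
  <fs ref="webroot"/>
  <script ref="httpd"/>
</service>
```

After that, clustat shows which node owns apache-webserver, and clusvcadm -r apache-webserver relocates it to the other node for a manual failover test.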