How to Configure a Two-Node High-Availability Cluster on RHEL/CentOS

Sourabh Dey
Jan 22, 2022

This article will help you learn how to set up and configure a High-Availability (HA) cluster on Linux/Unix based systems. A cluster is simply a group of computers (called nodes or members) that work together to execute a task. Broadly, there are four types of clusters: Storage, High-Availability, Load-Balancing, and High-Performance Computing clusters. In production, HA (High-Availability) and LB (Load-Balancing) clusters are the most commonly deployed types, as they offer uninterrupted availability of services and data (for example, web services) to the end-user community. HA cluster configurations are usually grouped into two subsets: Active-Active and Active-Passive.

Active-Active: You typically need a minimum of two nodes, and both nodes actively run the same service/application. This is mainly used to build a Load-Balancing (LB) cluster that distributes the workload across the nodes.

Active-Passive: It also needs a minimum of two nodes to provide a fully redundant system. Here, the service/application runs on only one node at a time; this is mainly used to achieve a High-Availability (HA) cluster, where one node is active and the other is a standby (passive).

In our setup, we will focus only on the High-Availability (Active-Passive) type, also known as a failover cluster. One of the biggest benefits of an HA cluster is that the nodes keep track of each other and migrate the service/application to another node if the active node fails. The faulty node is not visible to outside clients, although there is a small service disruption during the migration period. The cluster also maintains the data integrity of the service it protects.

Prerequisites:

Operating System : CentOS Linux / Red Hat Enterprise Linux
Shared Storage : iSCSI SAN
Floating IP Address : for the cluster nodes
Packages : pcs, fence-agents-all and targetcli

My Lab Set Up:

iSCSI Shared Storage : 192.168.43.99

Node1: 192.168.43.147

Node2: 192.168.43.56

Virtual IP : 192.168.43.228

/etc/hosts Configuration

Set Up Storage on the iSCSI Storage Server:

Package Installation on the Server Side

yum update -y

yum install -y targetcli

Now enter the targetcli interactive shell on the iSCSI server:
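A minimal sketch of this step. You will also need a backstore for the shared disk before a LUN can be mapped later; here I assume the storage server's spare disk is /dev/sdb and the block backstore is named shared_disk (both are placeholders, adjust to your setup):

targetcli
cd /backstores/block
create shared_disk /dev/sdb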

Create the iSCSI IQN target:
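Something like this, still inside targetcli (the IQN below is a made-up placeholder; replace it with your own naming):

cd /iscsi
create iqn.2022-01.local.server:disk1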

Create ACLs:

First, go to the acls directory!
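A sketch, assuming the target IQN created above and the two initiator IQNs that will later be assigned to Node1 and Node2 (all three names are placeholders):

cd /iscsi/iqn.2022-01.local.server:disk1/tpg1/acls
create iqn.1994-05.com.redhat:node1
create iqn.1994-05.com.redhat:node2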

Create LUNs under the iSCSI target:

Make sure you are in the luns directory before entering this command
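Assuming the block backstore created earlier was named shared_disk, mapping it as a LUN looks like this:

cd /iscsi/iqn.2022-01.local.server:disk1/tpg1/luns
create /backstores/block/shared_disk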

Enable CHAP Authentication
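A rough sketch of enabling CHAP from inside targetcli; the userid and password below are placeholders, so set your own and reuse the same values later on the nodes:

cd /iscsi/iqn.2022-01.local.server:disk1/tpg1
set attribute authentication=1
cd acls/iqn.1994-05.com.redhat:node1
set auth userid=iscsiuser password=password123
cd /iscsi/iqn.2022-01.local.server:disk1/tpg1/acls/iqn.1994-05.com.redhat:node2
set auth userid=iscsiuser password=password123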

Sorry for the weak password, I am not good with passwords

Save the configuration and exit:
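From inside targetcli:

saveconfig
exit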

Add a firewall rule to permit the iSCSI port 3260, OR disable the firewall
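To allow the port (assuming firewalld is in use):

firewall-cmd --permanent --add-port=3260/tcp
firewall-cmd --reload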

OR
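The lazy option would be:

systemctl stop firewalld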

Just Kidding :)

Finally, enable and start the iSCSI target
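The LIO target service is simply called target:

systemctl enable target
systemctl start target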

Set Up High Availability Cluster :

/etc/hosts

Node1:

Node2:
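The entries are the same on both nodes and would look roughly like this (the hostnames storage, node1 and node2 are placeholders; use whatever names you prefer, consistently):

192.168.43.99    storage
192.168.43.147   node1
192.168.43.56    node2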

Node-Side Configuration:

Install the iscsi-initiator package on both nodes

Install on both nodes
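On RHEL/CentOS the package is:

yum install -y iscsi-initiator-utils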

Check the IQN number on both of the nodes

IQN configuration file path on both nodes
Node1
Node2
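The initiator name lives in the same file on both nodes:

cat /etc/iscsi/initiatorname.iscsi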

Now go to both nodes and add the target shared storage server's IQN details (see the sketch after the Node 2 step below).

Node 1:

Save and Exit

Node 2:

Save and Exit
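A sketch of what this step looks like, assuming the goal is to make each node's initiator name match the ACL entries created on the target (the IQNs below are the placeholders used earlier):

vi /etc/iscsi/initiatorname.iscsi
# on Node1:
InitiatorName=iqn.1994-05.com.redhat:node1
# on Node2:
InitiatorName=iqn.1994-05.com.redhat:node2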

Save and restart the iscsid service on both nodes

Run these one by one on both nodes
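For example:

systemctl restart iscsid
systemctl enable iscsid
systemctl status iscsid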

Output:

Status is Running

Next, configure CHAP authentication on both nodes (Node1 and Node2)

Add the authentication details at the bottom of the file on both nodes. For the username and password, please scroll up for reference.
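The relevant lines in /etc/iscsi/iscsid.conf; the username and password must match what was set on the target (the values below are my placeholders from earlier):

node.session.auth.authmethod = CHAP
node.session.auth.username = iscsiuser
node.session.auth.password = password123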

Now it is time to discover the iSCSI shared storage (LUNs) on both nodes (Node1 and Node2)

sudo iscsiadm --mode discoverydb --type sendtargets --portal 192.168.43.99 --discover

Put the iSCSI shared storage IP address in the portal option
The output will look like this:

Use the following command to log in to the Target Server:

Fire the command on both of the nodes
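Assuming the placeholder target IQN created earlier, the login looks like this:

sudo iscsiadm --mode node --targetname iqn.2022-01.local.server:disk1 --portal 192.168.43.99:3260 --login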
Output

Use the following command to verify the newly added disk on both nodes

node1:

node2:
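Either of these should show the new disk:

lsblk
fdisk -l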

Use the following command to create a filesystem on the newly added block device (/dev/sdb) from any one of your nodes, either node1 or node2. In this demo I will do it on Node1.

I am using XFS, but you can choose any filesystem type according to your needs
Options for you
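For XFS on the new LUN:

mkfs.xfs /dev/sdb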
Output of the mkfs command

For testing purposes, use the following steps to mount the newly added disk temporarily on the /mnt directory, create 3 files named “1, 2, 3”, use the ‘ls’ command to verify these files are placed in the /mnt directory, and finally unmount /mnt from Node1.
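Roughly:

mount /dev/sdb /mnt
touch /mnt/1 /mnt/2 /mnt/3
ls /mnt
umount /mnt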

Now, move on to Node2 and run the following command to see if those files created on Node1 are available on Node2.
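On Node2:

mount /dev/sdb /mnt
ls /mnt
umount /mnt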

Files are there

Install and Configure the Cluster

Enable the High Availability repository on both nodes

CentOS 7:

Red Hat 8:
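On CentOS 7 the HA packages are already available from the standard base repositories, so nothing extra is normally needed there. On Red Hat 8 the HighAvailability repository has to be enabled through your subscription (the exact repo name can vary with your subscription type), for example:

subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms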

Use the following command to Install cluster Packages (pacemaker) on both nodes (Node1 and Node2)

Fire up this command on both nodes and sit back, because it will take some time :)
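On both nodes:

yum install -y pcs pacemaker fence-agents-all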

Add the High-Availability service to the firewall on both nodes:
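firewall-cmd --permanent --add-service=high-availability
firewall-cmd --reload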

Now, start the cluster service and enable it for every reboot on both nodes (Node1 and Node2)
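On both nodes:

systemctl start pcsd
systemctl enable pcsd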

Cluster Configuration: Use the following command to set the password for the “hacluster” user on both nodes (Node1 and Node2).

It will set the password for the hacluster user
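On both nodes:

passwd hacluster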

Use the following command to authorize the nodes. Execute it on only one of your nodes in the cluster. In our case, I would prefer to run it on Node1.

Put the username and password (123456789); it will authorize the nodes
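Assuming the hostnames node1 and node2 from /etc/hosts; on CentOS 7 the command is pcs cluster auth, while on RHEL 8 (pcs 0.10) the equivalent is pcs host auth:

pcs cluster auth node1 node2 -u hacluster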

Start and configure the cluster nodes. Execute the following command on only one of your nodes. In our case, Node1.

example_cluster will be your cluster name; you can name it anything!
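Again assuming the node names from /etc/hosts; on CentOS 7:

pcs cluster setup --name example_cluster node1 node2

(On RHEL 8 the syntax drops --name: pcs cluster setup example_cluster node1 node2.)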
Output:

Enable the cluster service on every reboot
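pcs cluster enable --all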

Check the status of both the nodes in the cluster :
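pcs cluster status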

Alternative Way:
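One common alternative (my guess as to what the original screenshot showed) is the pacemaker monitor in one-shot mode:

crm_mon -1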

Setup Fencing

Fencing, also known as STONITH (“Shoot The Other Node In The Head”), is one of the important tools in the cluster, used to safeguard against data corruption on the shared storage. Fencing plays a vital role when the nodes are not able to talk to each other: it detaches the shared storage access from the faulty node. There are two types of fencing: resource-level fencing and node-level fencing.

For this demo, I am not going to run Fencing (STONITH), as our machines are running in a VMware environment, which doesn’t support it, but for those who are implementing in a production environment please click here to see the entire setup of fencing

Use the following command to disable the STONITH and ignore the quorum policy and check the status of Cluster Properties to ensure both are disabled:
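pcs property set stonith-enabled=false
pcs property set no-quorum-policy=ignore
pcs property list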

Resources / Cluster Services

Now install the service on both nodes. It can be anything:

a web service like httpd/Apache or nginx

a database like PostgreSQL or EDB

In this demo I will use Apache (httpd). Install the service on both nodes.
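For Apache, on both nodes:

yum install -y httpd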

Configure / edit the configuration file:

Configure it on both nodes
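One common approach (an assumption on my part, since the original screenshot is gone) is to enable the server-status handler in /etc/httpd/conf/httpd.conf so that the cluster's apache resource agent can monitor the service:

<Location /server-status>
    SetHandler server-status
    Require local
</Location>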

In order to store the Apache files (HTML/CSS), we need to use our centralized storage unit (i.e., the iSCSI server). This setup only has to be done on one node. In our case, Node1.
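A sketch of that step, assuming the shared disk will later be mounted at /var/www by the cluster (the directory layout and index page are just examples):

mount /dev/sdb /var/www
mkdir /var/www/html /var/www/cgi-bin /var/www/error
echo "Hello from the HA cluster" > /var/www/html/index.html
umount /var/www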

Add the Apache Service into the firewall

Port 80 for HTTP and Port 443 for HTTPS
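firewall-cmd --permanent --add-service=http
firewall-cmd --permanent --add-service=https
firewall-cmd --reload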

OR
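The lazy way again:

systemctl stop firewalld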

Just Kidding don’t do that :)

Disable the SELinux configuration:
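To switch it off for the current boot and keep it relaxed after a reboot (setting it to permissive rather than fully disabled works too):

setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config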

Create Resources. In this section, we will add three cluster resources: a Filesystem resource named APACHE_FS, a floating IP address resource named APACHE_VIP, and a webserver resource named APACHE_SERV. Use the following commands to add the three resources to the same group.

Add the first resource: Filesystem with the combination of shared storage (iSCSI Server)
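Something like this, assuming the group name APACHE_GROUP (a placeholder), the shared device /dev/sdb, and the /var/www mount point used earlier:

pcs resource create APACHE_FS Filesystem device="/dev/sdb" directory="/var/www" fstype="xfs" --group APACHE_GROUP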

Add a second resource: Floating IP address

The IP will be the virtual IP address; scroll up for reference
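With the virtual IP from the lab setup (the /24 netmask is an assumption for this lab network):

pcs resource create APACHE_VIP IPaddr2 ip=192.168.43.228 cidr_netmask=24 --group APACHE_GROUP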

Add the third resource: APACHE_SERV
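Assuming the default httpd config path and the server-status URL configured above:

pcs resource create APACHE_SERV apache configfile="/etc/httpd/conf/httpd.conf" statusurl="http://127.0.0.1/server-status" --group APACHE_GROUP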

After the resources and the resource group creation, start the cluster.
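pcs cluster start --all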

Check the pcs status
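pcs status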

Now it’s time to check the cluster:

Test High-Availability (HA)/Failover Cluster

Virtual IP

The final step in our High-Availability cluster is the failover test: we manually stop the active node (Node1), check the status from Node2, and try to access our webpage using the virtual IP.
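Roughly, assuming the hostname node1:

pcs cluster stop node1
pcs status          # run this from Node2

Then open http://192.168.43.228 in the browser to confirm the page still loads.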

As you can see, node1 is completely stopped
Check the status from Node2 to verify the failover
Still working

Some pcs cluster commands for managing the cluster:

To stop a specific node
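pcs cluster stop <node-name>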

Want to know more inputs and switches?
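pcs --help
man pcs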

Now let’s say you’re fed up with the command line and want some GUI experience:
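The pcsd daemon serves a web UI on port 2224, so point your browser at one of the nodes, for example Node1:

https://192.168.43.147:2224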

Node1 IP Address
Accept the risk and continue
Put the username and password
As you can see our already added cluster

I hope you find something interesting by reading this. Ping me if you find any errors or any difficulty understanding any concept.

Bye!
