In this post I would like to share how to set up a two-node cluster.
First: Configure the Ethernet connection
For every node, we have to configure the network script for the Ethernet card, /etc/hosts, /etc/hostname, /etc/resolv.conf, /etc/hosts.allow (and /etc/hosts.deny), and iptables. The goals are the following:
- Assign private static IPs to every node.
- Assign hostnames to every node.
- Assign a virtual gateway and nameserver.
- Allow communication between the nodes through iptables (if applicable) and hosts.allow.
Let's say we have two nodes and we would like to name them node1 (head node) and node2.
Note: On node1 (chosen as the head node), enp3s0f0 is connected to the internet, so I used enp3s0f1 for the local network.
Sample configuration for node1
$ cat /etc/sysconfig/network-scripts/ifcfg-enp3s0f1
TYPE=Ethernet
BOOTPROTO=none
IPADDR=192.168.1.2 #private IP for node1
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
BROADCAST=192.168.1.255
NETWORK=192.168.1.0
HOSTNAME=node1
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME=enp3s0f1
ONBOOT=yes
NM_CONTROLLED=no
Leave /etc/sysconfig/network untouched.
$ cat /etc/hosts
127.0.0.1 localhost
192.168.1.1 router
192.168.1.2 node1
192.168.1.3 node2
$ cat /etc/hostname
node1
$ cat /etc/resolv.conf
nameserver 192.168.1.1
$ cat /etc/hosts.allow
ALL : 192.168.1.0/255.255.255.0
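If you also use /etc/hosts.deny to lock the machine down, a common pattern (a sketch, not the only option) is to deny everything there and rely on the subnet rule in hosts.allow above:
$ cat /etc/hosts.deny
ALL : ALL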
Configuration for node2 is similar.
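For reference, here is a sketch of node2's network script, assuming its local interface is also named enp3s0f1 (the name may differ on your hardware). Only IPADDR and HOSTNAME change; /etc/hostname on node2 should contain node2.
$ cat /etc/sysconfig/network-scripts/ifcfg-enp3s0f1
TYPE=Ethernet
BOOTPROTO=none
IPADDR=192.168.1.3 # private IP for node2
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
BROADCAST=192.168.1.255
NETWORK=192.168.1.0
HOSTNAME=node2
ONBOOT=yes
NM_CONTROLLED=no
# remaining lines (DEFROUTE, PEERDNS, IPv6 settings, NAME) as on node1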
The following rule may be needed at the appropriate place in iptables:
-A INPUT -s 192.168.1.0/24 -m state --state NEW,ESTABLISHED -j ACCEPT
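On CentOS/RHEL-style systems the rule lives in /etc/sysconfig/iptables and must appear above the final REJECT rule, otherwise it is never reached. A sketch of the relevant part of the file (your existing rules will differ):
$ cat /etc/sysconfig/iptables
...
-A INPUT -s 192.168.1.0/24 -m state --state NEW,ESTABLISHED -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
...
After editing, reload the rules with
$ service iptables restart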
The following commands may be required for the changes to take effect:
$ service NetworkManager restart
$ service network restart
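To verify the link, check that the interface is up and ping the other node by name (this relies on the /etc/hosts entries above):
$ ip addr show enp3s0f1
$ ping -c 3 node2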
Second: Set up passwordless SSH between nodes
For every user, do the following:
# On node1, generate a public/private key pair
$ ssh-keygen -t rsa
# Copy the public key to node2. This also creates known_hosts in ~/.ssh on node1
$ scp .ssh/id_rsa.pub node2:~/.ssh/authorized_keys
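Note that scp overwrites any existing authorized_keys on node2. If the file may already contain keys, appending over ssh is safer (a sketch; it also sets the permissions sshd expects):
$ cat ~/.ssh/id_rsa.pub | ssh node2 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys'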
On node2, do the same (generate a key pair and copy the public key to node1).
Easier way
On node1, simply do
$ ssh-copy-id node2
On node2, simply do
$ ssh-copy-id node1
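Either way, verify the setup: the following should print the remote hostname without prompting for a password.
$ ssh node2 hostname
node2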
Third: Set up NFS
Suppose I have an /apps directory on node1 that I would like to share with the other node. In this case, NFS comes in handy.
Important files to configure:
- On NFS server: /etc/exports
- On NFS client: /etc/fstab
On the NFS server, export the directory to be shared:
$ cat /etc/exports
/apps node2(rw,sync,no_root_squash)
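If the NFS server is already running, the exports can be re-read without a full restart:
$ exportfs -ra
$ exportfs -v   # list what is currently exported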
On the NFS client, to automate the NFS mount on boot:
$ cat /etc/fstab
node1:/apps /apps nfs defaults 0 0
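To test the fstab entry without rebooting (assuming the export on node1 is already active), mount everything listed in fstab:
$ mount -a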
Check whether the export was successful on the server by running the following on the client:
$ showmount -e node1
After exporting on the server side, the share can be mounted on the client side with
$ mount -t nfs node1:/apps /apps
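To confirm the mount succeeded, check that /apps now reports node1:/apps as its filesystem:
$ df -h /apps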
The following command may be required for the changes to take effect:
$ service nfs restart