This is a "copy & paste" HowTo! The easiest way to follow this tutorial is to use a command line client/SSH client (like PuTTY for Windows) and simply copy and paste the commands (except where you have to provide own information like IP addresses, hostnames, passwords,...). This helps to avoid typos.

Setting Up A Highly Available NFS Server

Version 1.0
Author: Falko Timme
Last edited: 03/07/2006

In this tutorial I will describe how to set up a highly available NFS server that can be used as a storage solution for other high-availability services like, for example, a cluster of web servers that are being load-balanced. If you have a web server cluster with two or more nodes that serve the same web site(s), then these nodes must access the same pool of data so that every node serves the same data, no matter whether the load balancer directs the user to node 1 or node n. This can be achieved with an NFS share on an NFS server that all web server nodes (the NFS clients) can access.
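
To illustrate where this is heading: once the cluster is running, each web server node would mount the share through the cluster's virtual IP address (introduced in chapter 1 below) with a command roughly like this (the local mount point /var/www is only an example and not part of this tutorial's setup):

client:

mount -t nfs 192.168.0.174:/data/export /var/www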

As we do not want the NFS server to become another "Single Point of Failure", we have to make it highly available. In fact, in this tutorial I will create two NFS servers that mirror their data to each other in real time using DRBD and that monitor each other using heartbeat; if one NFS server fails, the other takes over silently. To the outside (e.g. the web server nodes), these two NFS servers will appear as a single NFS server.
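
As a preview of how heartbeat ties these pieces together: with heartbeat version 1, the failover behaviour is typically described by a single line in /etc/ha.d/haresources. The sketch below is only an illustration; the resource name r0 and the device /dev/drbd0 are assumptions that depend on the DRBD configuration created later in this tutorial:

server1 IPaddr::192.168.0.174 drbddisk::r0 Filesystem::/dev/drbd0::/data::ext3 nfs-kernel-server

This line tells heartbeat to bring up the virtual IP address, make the DRBD resource primary, mount /data, and start the NFS server on whichever node is currently active.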

In this setup I will use Debian Sarge (3.1) for the two NFS servers as well as for the NFS client (which represents a node of the web server cluster).

I want to say first that this is not the only way of setting up such a system. There are many ways of achieving this goal, but this is the way I have chosen. I do not issue any guarantee that this will work for you!

1 My Setup

In this document I use the following systems:

  • NFS server 1: server1.example.com, IP address: 192.168.0.172; I will refer to this one as server1.
  • NFS server 2: server2.example.com, IP address: 192.168.0.173; I will refer to this one as server2.
  • Virtual IP address: I use 192.168.0.174 as the virtual IP address that represents the NFS cluster to the outside.
  • NFS client (e.g. a node from the web server cluster): client.example.com, IP address: 192.168.0.100; I will refer to the NFS client as client.
  • The /data directory will be mirrored by DRBD between server1 and server2. It will contain the NFS share /data/export.
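
It can be useful if server1 and server2 can resolve each other's hostnames even without DNS. A minimal sketch of the corresponding entries in /etc/hosts on both servers (based on the addresses above; adjust them to your own naming) would be:

192.168.0.172   server1.example.com   server1
192.168.0.173   server2.example.com   server2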

2 Basic Installation Of server1 and server2

First we set up two basic Debian systems for server1 and server2. You can do this as outlined on the first two pages of this tutorial: https://www.howtoforge.com/perfect_setup_debian_sarge. As the hostname, enter server1 and server2 respectively, and as the domain enter example.com.

Regarding the partitioning, I use the following partition scheme:

/dev/sda1 -- 100 MB /boot (primary, ext3, Bootable flag: on)
/dev/sda5 -- 5000 MB / (logical, ext3)
/dev/sda6 -- 1000 MB swap (logical)
/dev/sda7 -- 150 MB unmounted (logical, ext3)
(will contain DRBD's meta data)
/dev/sda8 -- 26 GB unmounted (logical, ext3)
(will contain the /data directory)

You can vary the sizes of the partitions depending on your hard disk size, and the names of your partitions might also vary, depending on your hardware (e.g. you might have /dev/hda1 instead of /dev/sda1 and so on). However, it is important that /dev/sda7 is a little larger than 128 MB because we will use this partition for DRBD's meta data, which takes up 128 MB. Also, make sure /dev/sda7 as well as /dev/sda8 are identical in size on server1 and server2, and please do not mount them (when the installer asks you:

No mount point is assigned for the ext3 file system in partition #7 of SCSI1 (0,0,0) (sda).
Do you want to return to the partitioning menu?

please answer No)! /dev/sda8 is going to be our data partition (i.e., our NFS share).
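
To verify that /dev/sda7 and /dev/sda8 really are identical in size on both machines, you can compare the partition tables after the installation (this check is my own suggestion; the device name is the one from my setup):

server1/server2:

fdisk -l /dev/sda

The sizes reported for sda7 and sda8 should match on server1 and server2.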

After the basic installation, make sure that you give server1 and server2 static IP addresses (server1: 192.168.0.172, server2: 192.168.0.173), as described at the beginning of https://www.howtoforge.com/perfect_setup_debian_sarge_p3.
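
For reference, the static network configuration in /etc/network/interfaces on server1 might look like this (the netmask, network, broadcast, and gateway values are assumptions for a typical 192.168.0.0/24 network; on server2 use the address 192.168.0.173 instead):

auto eth0
iface eth0 inet static
        address 192.168.0.172
        netmask 255.255.255.0
        network 192.168.0.0
        broadcast 192.168.0.255
        gateway 192.168.0.1

Afterwards, restart the network:

server1/server2:

/etc/init.d/networking restart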

Afterwards, you should check /etc/fstab on both systems. Mine looks like this on both systems:

# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
/dev/sda5       /               ext3    defaults,errors=remount-ro 0 1
/dev/sda1       /boot           ext3    defaults        0       2
/dev/sda6       none            swap    sw              0       0
/dev/hdc        /media/cdrom0   iso9660 ro,user,noauto  0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto  0       0

If you find that yours looks like this, for example:

# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>  <options>       <dump>  <pass>
proc            /proc           proc    defaults        0       0
/dev/hda5       /               ext3    defaults,errors=remount-ro 0 1
/dev/hda1       /boot           ext3    defaults        0       2
/dev/hda6       none            swap    sw              0       0
/dev/hdc        /media/cdrom0   iso9660 ro,user,noauto  0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto  0       0

then please make sure you use /dev/hda instead of /dev/sda in the following configuration files. Also make sure that /dev/sda7 (or /dev/hda7) and /dev/sda8 (or /dev/hda8...) are not listed in /etc/fstab!
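
A quick way to check this is to search /etc/fstab for the two partitions; the following command (my own addition, matching both the sda and hda naming) should produce no output on either server:

server1/server2:

grep -E '/dev/[sh]da[78]' /etc/fstab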

3 Synchronize System Time

It's important that both server1 and server2 have the same system time. Therefore we install an NTP client on both:

server1/server2:

apt-get install ntp ntpdate

Afterwards you can check that both have the same time by running

server1/server2:

date
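
If the clocks differ noticeably right after the installation, you can force an initial synchronization against a public time server (pool.ntp.org is just one example; the -u switch makes ntpdate use an unprivileged port so it also works while the NTP daemon is already running):

server1/server2:

ntpdate -u pool.ntp.org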
