High-Availability Storage With GlusterFS On Debian Lenny - Automatic File Replication Across Two Storage Servers

This tutorial shows how to set up high-availability storage with two storage servers (Debian Lenny) that use GlusterFS. Each storage server will be a mirror of the other storage server, and files will be replicated automatically across both storage servers. The client system (Debian Lenny as well) will be able to access the storage as if it were a local filesystem. GlusterFS is a clustered file system capable of scaling to several petabytes. It aggregates various storage bricks over InfiniBand RDMA or TCP/IP interconnect into one large parallel network file system. Storage bricks can be made of any commodity hardware, such as x86-64 servers with SATA-II RAID and InfiniBand HBA.

I do not issue any guarantee that this will work for you!

 

1 Preliminary Note

In this tutorial I use three systems, two servers and a client:

  • server1.example.com: IP address 192.168.0.100 (server)
  • server2.example.com: IP address 192.168.0.101 (server)
  • client1.example.com: IP address 192.168.0.102 (client)

All three systems should be able to resolve the other systems' hostnames. If this cannot be done through DNS, you should edit the /etc/hosts file so that it looks as follows on all three systems:

vi /etc/hosts
127.0.0.1       localhost.localdomain   localhost
192.168.0.100   server1.example.com     server1
192.168.0.101   server2.example.com     server2
192.168.0.102   client1.example.com     client1

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

(It is also possible to use IP addresses instead of hostnames in the following setup. If you prefer to use IP addresses, you don't have to worry about whether the hostnames can be resolved.)
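If you want to make sure that name resolution works before you continue, you can run a quick check on each system, for example with getent (the hostnames below are the ones used in this tutorial):

getent hosts server1.example.com
getent hosts server2.example.com
getent hosts client1.example.com

Each command should print the IP address that you configured for that hostname.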

 

2 Setting Up The GlusterFS Servers

server1.example.com/server2.example.com:

GlusterFS isn't available as a Debian package for Debian Lenny, so we have to build it ourselves. First we install the prerequisites:

aptitude install sshfs build-essential flex bison byacc libdb4.6 libdb4.6-dev

Then we download the latest GlusterFS release from http://www.gluster.org/download.php and build it as follows:

cd /tmp
wget http://ftp.gluster.com/pub/gluster/glusterfs/2.0/LATEST/glusterfs-2.0.1.tar.gz
tar xvfz glusterfs-2.0.1.tar.gz
cd glusterfs-2.0.1
./configure --prefix=/usr > /dev/null

The output of the configure run should look like this:

server1:/tmp/glusterfs-2.0.1# ./configure --prefix=/usr > /dev/null

GlusterFS configure summary
===========================
FUSE client        : no
Infiniband verbs   : no
epoll IO multiplex : yes
Berkeley-DB        : yes
libglusterfsclient : yes
mod_glusterfs      : no ()
argp-standalone    : no

server1:/tmp/glusterfs-2.0.1#
make && make install
ldconfig

The command

glusterfs --version

should now show the GlusterFS version that you've just compiled (2.0.1 in this case):

server1:/tmp/glusterfs-2.0.1# glusterfs --version
glusterfs 2.0.1 built on May 29 2009 17:23:10
Repository revision: 5c1d9108c1529a1155963cb1911f8870a674ab5b
Copyright (c) 2006-2009 Z RESEARCH Inc. <http://www.zresearch.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.
server1:/tmp/glusterfs-2.0.1#

Next we create a few directories:

mkdir /data/
mkdir /data/export
mkdir /data/export-ns
mkdir /etc/glusterfs

Now we create the GlusterFS server configuration file /etc/glusterfs/glusterfsd.vol which defines which directory will be exported (/data/export) and what client is allowed to connect (192.168.0.102 = client1.example.com):

vi /etc/glusterfs/glusterfsd.vol
volume posix
  type storage/posix
  option directory /data/export
end-volume

volume locks
  type features/locks
  subvolumes posix
end-volume

volume brick
  type performance/io-threads
  option thread-count 8
  subvolumes locks
end-volume

volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.brick.allow 192.168.0.102
  subvolumes brick
end-volume

Please note that it is possible to use wildcards for the IP addresses (like 192.168.*) and that you can specify multiple IP addresses separated by commas (e.g. 192.168.0.102,192.168.0.103).
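For example, the auth.addr.brick.allow line in /etc/glusterfs/glusterfsd.vol could then look like one of the following (192.168.0.103 is just a placeholder for a second client):

option auth.addr.brick.allow 192.168.*
option auth.addr.brick.allow 192.168.0.102,192.168.0.103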

Afterwards we create the system startup links for the glusterfsd init script...

update-rc.d glusterfsd defaults

... and start glusterfsd:

/etc/init.d/glusterfsd start
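
To check that glusterfsd is actually running and listening for connections, you can, for example, look at the process list and the open TCP ports:

ps aux | grep glusterfsd
netstat -tap | grep glusterfsd

If glusterfsd started successfully, both commands should show a glusterfsd entry.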