How To Set Up And Configure MySQL Percona XtraDB Cluster

With our websites growing ever richer with data, and apps that can reach millions of users overnight, a single database server handling all the traffic is often no longer enough. Instead, a cluster of database servers becomes a viable option, especially with virtual machine nodes available for as little as $2.50 a month per node.

Configuring such a cluster is not exactly a walk in the park, though, and the aim of this tutorial is to change that, as much as possible at least.

In this tutorial, we will install and configure a cluster of database servers running Percona XtraDB Cluster 5.7 (Percona is a fully compatible, drop-in replacement for MySQL), along with HAProxy, a high-availability load-balancing proxy server that will handle all the traffic directed to the database nodes.

What you will need for this, besides a little experience with the Linux operating system and some networking basics, are four virtual machine nodes, which can be rented from Hetzner or DigitalOcean for as little as a few bucks a month.

All nodes need an extra network adapter for communication and replication between the nodes, keeping that traffic separated from the Internet for security reasons.

Create Virtual Machine Nodes

We will begin by creating a new virtual machine on Hetzner or DigitalOcean. Choose the smallest, least expensive node, with 2 GB of RAM and a single vCPU, pick Debian 10 as the operating system, and add your SSH public key for authentication (this is very important: password authentication poses a serious security risk today, and it can easily be avoided by using SSH keys).

If you do not have an SSH key pair yet, you can create one with the ssh-keygen utility, built into every Linux box. The keys are saved to the /home/your-account/.ssh directory by default; print the public key on the screen with the cat command and paste it into the Hetzner (or DigitalOcean) account backend.
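A minimal sketch of that key setup, using the ssh-keygen defaults (the key comment is just a label, pick anything you like):

ssh-keygen -t ed25519 -C "pxc-tutorial"
cat ~/.ssh/id_ed25519.pub

The first command generates the key pair and saves it under ~/.ssh; the second prints the public key, which is the part you paste into the provider backend. The private key never leaves your machine.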

When creating a new virtual machine, do not forget to add an extra private network interface, which DigitalOcean or Hetzner will automatically place on the 10.0.0.0/24 subnet.

Once the virtual machine is up and running, log into it and install the nano editor, which is the simplest editor for beginners to use. Let's begin with:

Install Nano Editor On Debian

apt-get install nano

Once nano is installed, issue the following command:

apt-get -y update && apt-get -y full-upgrade

Install and Configure Percona XtraDB

Once the process is complete, reboot the machine and log in again.

Now we can continue with the Percona XtraDB installation. In order for our nodes to be able to communicate, we will install, configure and enable the firewall:

apt-get install ufw

Once installed, add the following rules to it and enable it:

ufw default deny incoming
ufw default allow outgoing
ufw allow 2222/tcp
ufw allow from 10.0.0.0/24
ufw enable

What we did here is the following:

  • With the first line, we deny all incoming traffic to the node.
  • With the second line, we allow all outgoing traffic from the node.
  • With the third line, we “punched a hole” in the wall we built with the first line, allowing us to reach the node over SSH (port 2222 here; adjust this rule if your SSH daemon still listens on the default port 22). This is secure enough, since only we hold the private key.
  • With the fourth line, we allow all traffic on the private subnet between the nodes themselves, which is fully separated from the public Internet and poses no real risk.
  • This way, the nodes will be able to synchronize and communicate with each other on ports 873, 3306, 4444, 4567, 4568 and 9200 (see the quick check right after this list).
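A quick way to confirm the firewall is active and the rules took effect (the exact output wording can differ slightly between ufw versions):

ufw status verbose
ufw status numbered

The first command should report Status: active with default deny (incoming) and allow (outgoing); the second lists every rule with an index, which is handy if you ever need to delete one.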

In case you already had MySQL installed, you might run into problems with AppArmor, so make sure it is removed by typing:

apt-get remove apparmor

Now that we have taken care of that, add a few prerequisites by issuing:

apt-get install -y wget gnupg2 lsb-release

Now download the repo:

wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb

And install it with:

dpkg -i percona-release_latest.generic_all.deb

We will now use Percona's percona-release script to enable the repository for the 5.7 series we are installing (the pxc-80 component would give us the 8.0 series instead):

percona-release setup pxc-57

Finish this by updating apt with:

apt-get update 

Now let’s search for the actual package:

apt search percona-xtradb-cluster

Sorting... Done
Full Text Search... Done
percona-xtradb-cluster-5.6-dbg/unknown 5.6.46-28.38-1.bionic amd64
 Debugging package for Percona XtraDB Cluster
percona-xtradb-cluster-5.7-dbg/unknown 5.7.28-31.41-1.bionic amd64
 Debugging package for Percona XtraDB Cluster
percona-xtradb-cluster-56/unknown 5.6.46-28.38-1.bionic amd64
 Percona XtraDB Cluster with Galera
percona-xtradb-cluster-57/unknown 5.7.28-31.41-1.bionic amd64
 Percona XtraDB Cluster with Galera
percona-xtradb-cluster-client-5.6/unknown 5.6.46-28.38-1.bionic amd64
 Percona XtraDB Cluster database client binaries
percona-xtradb-cluster-client-5.7/unknown 5.7.28-31.41-1.bionic amd64
 Percona XtraDB Cluster database client binaries
percona-xtradb-cluster-common-5.6/unknown 5.6.46-28.38-1.bionic amd64
 Percona XtraDB Cluster database common files (e.g. /etc/mysql/my.cnf)
percona-xtradb-cluster-common-5.7/unknown 5.7.28-31.41-1.bionic amd64
 Percona XtraDB Cluster database common files (e.g. /etc/mysql/my.cnf)
percona-xtradb-cluster-full-56/unknown 5.6.46-28.38-1.bionic amd64
 Percona XtraDB Cluster with Galera
percona-xtradb-cluster-full-57/unknown 5.7.28-31.41-1.bionic amd64
 Percona XtraDB Cluster with Galera
percona-xtradb-cluster-garbd-5.7/unknown 5.7.28-31.41-1.bionic amd64
 Garbd components of Percona XtraDB Cluster
percona-xtradb-cluster-garbd-debug-5.7/unknown 5.7.28-31.41-1.bionic amd64
 Debugging package for Percona XtraDB Cluster Garbd.

As we can see, percona-xtradb-cluster-full-57 is the package we are looking for, so we will now install it with:

apt-get install percona-xtradb-cluster-full-57

The installation script will ask for the desired root password here, so pick a sufficiently secure one, containing uppercase and lowercase letters, numbers and symbols.

Once the installation is done, it is time for us to configure the wsrep config file, where we will add the addresses of our nodes:

nano /etc/mysql/percona-xtradb-cluster.conf.d/wsrep.cnf 

In this file, look for the line with:

wsrep_cluster_address=gcomm://

Uncomment it (if commented) and add the addresses of our nodes, so that the line looks like this:

wsrep_cluster_address=gcomm://10.0.0.2,10.0.0.3,10.0.0.4

Of course, you will use the addresses your nodes received from the system. You can check them with a simple command:

ifconfig
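On a minimal Debian 10 image ifconfig (part of the net-tools package) may not be installed; the iproute2 equivalent, which is always available, works just as well:

ip -4 addr show

Look for the address on the private network interface, in the 10.0.0.0/24 range.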

Now look for the line that begins with:

#wsrep_node_address=192.168.70.63

Uncomment it and change the address to the IP of the current node, so that it looks like this:

wsrep_node_address=10.0.0.2

And change the host name as well on the line:

wsrep_node_name=pxc-cluster-node-1

Also, change the name of our cluster, on the line that says:

wsrep_cluster_name=pxc-cluster

And change the following line, choosing an appropriate password:

wsrep_sst_auth=sstuser:passw0rd

Once done, change these parameters on the other two nodes as well, setting the node addresses and host names appropriately.
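Put together, the relevant part of wsrep.cnf on the first node would look roughly like this (a sketch using the example 10.0.0.x addresses from this tutorial; keep your own cluster name and SST password):

# all cluster members
wsrep_cluster_address=gcomm://10.0.0.2,10.0.0.3,10.0.0.4
# this particular node
wsrep_node_address=10.0.0.2
wsrep_node_name=pxc-cluster-node-1
# identical on every node
wsrep_cluster_name=pxc-cluster
wsrep_sst_auth=sstuser:passw0rd

On the second and third node only wsrep_node_address and wsrep_node_name change (for example 10.0.0.3 with pxc-cluster-node-2, and 10.0.0.4 with pxc-cluster-node-3); everything else stays identical.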

Once we are done with configuration, it is time to bootstrap our cluster, which is done just once, on our first node, with the following command:

/etc/init.d/mysql bootstrap-pxc

Once the cluster is bootstrapped and started, we need to add the SST user to our database, which we can do after logging into our MySQL server with:

mysql -u root -p

Type in the root MySQL password you chose during installation, and once logged in issue the following:

CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 'passw0rd';

While changing the password to the one we used in our configuration file, on the line:

wsrep_sst_auth=sstuser:passw0rd

Also, issue the following command:

GRANT RELOAD, LOCK TABLES, PROCESS, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';

And then flush the privileges with:

FLUSH PRIVILEGES;
exit; 
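If you want to double-check the user, you can list its grants before exiting (or after logging back in):

SHOW GRANTS FOR 'sstuser'@'localhost';

The output should include RELOAD, LOCK TABLES, PROCESS and REPLICATION CLIENT on *.*.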

Now we can proceed to the other two nodes, where we will run a simple:

/etc/init.d/mysql start
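Each of these nodes will perform a state transfer (SST) from the first node as it starts, which can take a moment. A quick way to confirm that a node has joined and caught up is to check its local state, which should read Synced:

mysql -u root -p -e "show status like 'wsrep_local_state_comment';"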

If everything went well, we should now be able to check our cluster by logging in to the MySQL server again with mysql -u root -p and issuing the following command:

mysql> show status like 'wsrep_cluster%';

The output should look something like this:

mysql> show status like 'wsrep_cluster%';
+--------------------------+---------+
| Variable_name            | Value   |
+--------------------------+---------+
| wsrep_cluster_weight     | 3       |
| wsrep_cluster_conf_id    | 3       |
| wsrep_cluster_size       | 3       |
| wsrep_cluster_status     | Primary |
+--------------------------+---------+
4 rows in set (0.00 sec)

As we can see, wsrep_cluster_size shows three nodes in our cluster, which means everything is going according to plan.

We will now verify replication. While still logged in to MySQL, create a new database with the following command:

create database foo;

Now log in to MySQL on another node and list the databases there with:

show databases;

The output should now show us:

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| foo                |
| mysql              |
| sys                |
+--------------------+
4 rows in set (0.00 sec)

You can try and create a new database here as well:

create database bar;

And list the databases on the other node again, which should show that we now have both foo and bar in the list:

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| bar                |
| foo                |
| mysql              |
| sys                |
+--------------------+
5 rows in set (0.00 sec)
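If you want to go one step further, you can verify that table data replicates too, not just the schema. A minimal check, using the foo database created above:

-- on the first node
CREATE TABLE foo.t1 (id INT PRIMARY KEY, note VARCHAR(50)) ENGINE=InnoDB;
INSERT INTO foo.t1 VALUES (1, 'replicated row');

-- on any other node
SELECT * FROM foo.t1;

The row should show up on every node almost immediately; keep in mind that Galera only replicates InnoDB tables.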

We will now enable remote access to the databases with the following commands, issued while still logged in to the MySQL server:

GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'YOUR-ROOT-DB-PASSWORD' WITH GRANT OPTION;
GRANT ALL ON *.* to root@'%' IDENTIFIED BY 'YOUR-ROOT-DB-PASSWORD';
GRANT ALL PRIVILEGES ON *.* TO root@'localhost' IDENTIFIED BY 'YOUR-ROOT-DB-PASSWORD';
GRANT ALL PRIVILEGES ON *.* TO root@'127.0.0.1' IDENTIFIED BY 'YOUR-ROOT-DB-PASSWORD';
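From another node on the private subnet you can verify that remote access works, using the standard mysql client that is already installed on every database node:

mysql -h 10.0.0.2 -P 3306 -u root -p -e "SELECT 1;"

If the grants are in place, you will be prompted for the root database password and get a single-row result back.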

Install MySQL Load Balancer

Now that our cluster is working correctly, it is time to set up the load balancer, which will check the health of the nodes and redirect traffic appropriately.

We will start by installing xinetd with:

apt-get install xinetd

Once installed, edit the services list by issuing:

nano /etc/services

Scroll down to the lines with ports in the 9000 range and find the line that says:

bacula-sd      9103/udp

Add a new line below it with the following content:

mysqlchk       9200/tcp                       # mysqlchk

Save the file, and restart the xinetd daemon with:

service xinetd restart

You will need to do this on all three nodes.

Once you are done, deploy a new virtual machine, update the system and install nano as we did on the other nodes, and then install the haproxy package with:

Install Haproxy

apt-get install haproxy

When HAProxy is done installing, we need to adjust its configuration file to our needs, which we will do with:

nano /etc/haproxy/haproxy.cfg

Delete everything in it (or just rename the file and create a new one) and add the following:

global
 log 127.0.0.1 local0
 log 127.0.0.1 local1 notice
 maxconn 4096
 user haproxy
 group haproxy
 debug
 #quiet
 daemon

defaults
 log global
 mode http
 option tcplog
 option dontlognull
 retries 3
 option redispatch
 maxconn 2000
 timeout connect 5000
 timeout client 50000
 timeout server 50000

frontend pxc-front
 bind 0.0.0.0:3307
 mode tcp
 default_backend pxc-back

frontend stats-front
 bind 0.0.0.0:22002
 mode http
 default_backend stats-back

backend pxc-back
 mode tcp
 balance leastconn
 option httpchk
 server database1 10.0.0.2:3306 check port 9200 inter 12000 rise 3 fall 3
 server database2 10.0.0.3:3306 check port 9200 inter 12000 rise 3 fall 3
 server database3 10.0.0.4:3306 check port 9200 inter 12000 rise 3 fall 3

backend stats-back
 mode http
 balance roundrobin
 stats uri /haproxy/stats

Change the three server lines to use your own node addresses, save the file with CTRL+X, and restart HAProxy with:

service haproxy restart
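If the restart fails, you can ask HAProxy itself to validate the configuration file and point at the offending line:

haproxy -c -f /etc/haproxy/haproxy.cfg

A "Configuration file is valid" message means you are good to go.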

See if you can reach the nodes from the load balancer by connecting with telnet to port 9200, the port on which the load balancer checks the nodes' health (install the telnet package if it is missing):

telnet 10.0.0.2 9200

The connection should show you the clustercheck script's output, the same thing you would see running clustercheck locally on one of the nodes:

root@database3:~# clustercheck 
HTTP/1.1 200 OK
Content-Type: text/plain
Connection: close
Content-Length: 40
Percona XtraDB Cluster Node is synced.

The only thing left now is to sort out the firewall on this node, with the following commands (install ufw first, as we did on the database nodes):

ufw default deny incoming
ufw default allow outgoing
ufw allow 2222/tcp
ufw allow 3307/tcp
ufw allow from 10.0.0.0/24

Enable the firewall and reload the rules with:

ufw enable
ufw reload
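As a final sanity check, connect through the load balancer itself on port 3307 (the pxc-front frontend we configured) and confirm that the cluster answers. This assumes a MySQL client is installed on the load balancer node (for example via apt-get install default-mysql-client):

mysql -h 127.0.0.1 -P 3307 -u root -p -e "show status like 'wsrep_cluster_size';"

You should see wsrep_cluster_size reported as 3, served by whichever backend node HAProxy picked.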

And voilà! Believe it or not, you now have a working high-availability MySQL database cluster running on three nodes, with an HAProxy load balancer spreading the load, checking node health and directing traffic to the right place. Happy querying! :-)
