How to Deploy Nginx Load Balancing on Kubernetes Cluster on Ubuntu 18.04 LTS

Kubernetes is a free and open-source container orchestration system that can be used to deploy and manage containers. It was developed by Google and is specially designed for autoscaling and automated deployments. Kubernetes can run on any cloud infrastructure as well as on bare metal, and it enables you to distribute multiple applications across a cluster of nodes. Kubernetes comes with a rich set of features, including self-healing, auto-scaling, load balancing, batch execution, horizontal scaling, service discovery, storage orchestration and many more.

In this tutorial, we will learn how to set up Nginx load balancing with Kubernetes on Ubuntu 18.04.

Requirements

  • Two servers with Ubuntu 18.04 installed.
  • Minimum 2 GB of RAM installed on each server.
  • A root password is configured on both servers.

Getting Started

First, you will need to update both servers with the latest stable version. You can update them by running the following command:

apt-get update -y
apt-get upgrade -y

Once both servers are updated, restart them to apply all the changes.

By default, Kubernetes does not support swap memory and will not work properly if swap is active. So you will need to disable swap on both servers.

To disable swap temporarily, run the following command:

swapoff -a
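
You can verify that swap is now off with the free command; the Swap line should show zero for both total and used:

free -h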

To disable swap permanently, open the /etc/fstab file:

nano /etc/fstab

Comment out the swap entry, as shown in the last line of the example below:

# /etc/fstab: static file system information.
#
# use 'blkid' to print the universally unique identifier for a
# device; this may be used with uuid= as a more robust way to name devices
# that works even if disks are added and removed. see fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
# / was on /dev/sda1 during installation
# swap was on /dev/sda4 during installation
#UUID=65se21r-1d3t-3263-2198-e564c275e156 none swap sw 0 0

Save and close the file. Then, run the following command to check that the remaining entries in /etc/fstab still mount cleanly:

mount -a
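
If you prefer a non-interactive approach, you can comment out any swap entry in /etc/fstab with a single sed command (a generic sketch; double-check the file before rebooting):

sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab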

Next, you will need to set up hostname resolution on both servers so that each server can reach the other by its hostname.

To do so, open the /etc/hosts file using your preferred editor:

nano /etc/hosts

Add the following lines:

192.168.0.103 master
192.168.0.100 slave

Save and close the file when you are finished, then proceed to the next step.
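
To confirm that hostname resolution works, you can ping each server by hostname from the other (this assumes the IP addresses above match your environment):

ping -c 2 slave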

Install Docker and Kubernetes

Next, you will need to install Docker and the Kubernetes tools kubelet, kubeadm and kubectl on both servers.

First, install the required packages and add the Docker GPG key with the following commands:

apt-get install apt-transport-https ca-certificates curl software-properties-common -y
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -

Next, add the Docker CE repository on both servers by running the following command:

add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"

Next, update the repository and install Docker CE with the following commands:

apt-get update -y
apt-get install docker-ce -y

Once the installation is completed, check the status of Docker CE with the following command:

systemctl status docker

You should see the following output:

● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2019-07-19 07:05:50 UTC; 1h 24min ago
     Docs: https://docs.docker.com
 Main PID: 3619 (dockerd)
    Tasks: 8
   CGroup: /system.slice/docker.service
           └─3619 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock

Jul 19 07:05:48 master dockerd[3619]: time="2019-07-19T07:05:48.574491681Z" level=warning msg="Your kernel does not support swap memory limit"
Jul 19 07:05:48 master dockerd[3619]: time="2019-07-19T07:05:48.575196691Z" level=warning msg="Your kernel does not support cgroup rt period"
Jul 19 07:05:48 master dockerd[3619]: time="2019-07-19T07:05:48.575733336Z" level=warning msg="Your kernel does not support cgroup rt runtime"
Jul 19 07:05:48 master dockerd[3619]: time="2019-07-19T07:05:48.582517104Z" level=info msg="Loading containers: start."
Jul 19 07:05:49 master dockerd[3619]: time="2019-07-19T07:05:49.391255541Z" level=info msg="Default bridge (docker0) is assigned with an IP add
Jul 19 07:05:49 master dockerd[3619]: time="2019-07-19T07:05:49.681478822Z" level=info msg="Loading containers: done."
Jul 19 07:05:50 master dockerd[3619]: time="2019-07-19T07:05:50.003776717Z" level=info msg="Docker daemon" commit=0dd43dd graphdriver(s)=overla
Jul 19 07:05:50 master dockerd[3619]: time="2019-07-19T07:05:50.009892901Z" level=info msg="Daemon has completed initialization"
Jul 19 07:05:50 master systemd[1]: Started Docker Application Container Engine.
Jul 19 07:05:50 master dockerd[3619]: time="2019-07-19T07:05:50.279284258Z" level=info msg="API listen on /var/run/docker.sock"
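
Docker is usually enabled at boot automatically on Ubuntu, but you can make sure of it with the following command:

systemctl enable docker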

Kubernetes packages are not available in the Ubuntu 18.04 default repository. So, you will need to add the Kubernetes repository on both servers.

You can add it with the following commands:

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo 'deb http://apt.kubernetes.io/ kubernetes-xenial main' | tee /etc/apt/sources.list.d/kubernetes.list

Next, update the repository and install the Kubernetes packages with the following commands:

apt-get update -y
apt-get install kubelet kubeadm kubectl -y
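
Optionally, you can hold the Kubernetes packages at their current version so that an unattended upgrade does not break the cluster later:

apt-mark hold kubelet kubeadm kubectl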

Once all the packages are installed, you can proceed to configure the master server.

Configure Kubernetes Master Server

First, you will need to initialize your cluster on the master server, advertising the master's private IP address.

You can do this with the kubeadm command:

kubeadm init --pod-network-cidr=192.168.0.0/16 --apiserver-advertise-address=192.168.0.103

Once the cluster has been initialized successfully, you should see the following output:

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.103:6443 --token zsyq2w.c676bxzjul3upd7u \
    --discovery-token-ca-cert-hash sha256:a720ae35d472162177f6ee39de758a5de40043f53e4a3e00aefd6f9832f3436c 

Next, you will need to configure the kubectl tool on your master server. Make a note of the kubeadm join command at the end of the output above, as you will need it later to add the slave node. Then configure kubectl with the following commands:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config
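
At this point, you can verify that kubectl is able to talk to the cluster:

kubectl cluster-info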

Next, you will need to deploy a Container Network Interface (CNI) plugin on the master server, because the cluster does not yet have one and the nodes will not become Ready without it.

You can deploy Calico as the CNI for your cluster with the following command:

kubectl apply -f https://docs.projectcalico.org/v2.6/getting-started/kubernetes/installation/hosted/kubeadm/1.6/calico.yaml

You should see the following output:

configmap/calico-config created
daemonset.extensions/calico-etcd created
service/calico-etcd created
daemonset.extensions/calico-node created
deployment.extensions/calico-kube-controllers created
deployment.extensions/calico-policy-controller created
clusterrolebinding.rbac.authorization.k8s.io/calico-cni-plugin created
clusterrole.rbac.authorization.k8s.io/calico-cni-plugin created
serviceaccount/calico-cni-plugin created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
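
Before moving on, you can watch the Calico and DNS pods start in the kube-system namespace; all of them should eventually reach the Running state:

kubectl get pods -n kube-system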

You can now check your namespaces by running the following command:

kubectl get namespaces

If everything goes fine, you should see the following output:

NAME          STATUS    AGE
default       Active    4h
kube-public   Active    4h
kube-system   Active    4h

Next, verify whether the master node is now running properly with the following command:

kubectl get nodes

You should see the following output:

NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   12m   v1.15.3

Add Slave to the Kubernetes Cluster

Next, log in to your slave server and run the following command to add the slave to the Kubernetes cluster:

kubeadm join 192.168.0.103:6443 --token zsyq2w.c676bxzjul3upd7u --discovery-token-ca-cert-hash sha256:a720ae35d472162177f6ee39de758a5de40043f53e4a3e00aefd6f9832f3436c
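
If you no longer have the join command, or the token has expired (tokens are valid for 24 hours by default), you can generate a fresh join command on the master server:

kubeadm token create --print-join-command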

Next, go back to the master server and check whether the slave has been added to the Kubernetes cluster with the following command:

kubectl get nodes

You should see the following output:

NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   25m   v1.15.3
slave    Ready    <none>   2m    v1.15.3

Once you are finished, you can proceed to the next step.

Deploy NGINX on the Kubernetes Cluster

The Kubernetes cluster is now installed, configured and working properly. It's time to deploy Nginx on it.

Go to the Master server and create an Nginx deployment with the following command:

kubectl create deployment nginx --image=nginx

You can now list the Nginx deployment with the following command:

kubectl get deployments

You should see the following output:

NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           99s
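
You can also list the pod created by the deployment; kubectl create deployment labels it with app=nginx, so you can filter on that label and see which node it landed on:

kubectl get pods -l app=nginx -o wide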

Once Nginx has been deployed, the application can be exposed as a NodePort service with the following command:

kubectl create service nodeport nginx --tcp=80:80

You can now see the new service and the cluster IP address assigned to it with the following command:

kubectl get svc

You should see the following output:

NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.152.183.1     <none>        443/TCP        15m
nginx        NodePort    10.152.183.199   <none>        80:32456/TCP   60s
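
Nginx is now reachable on the assigned NodePort (32456 in the output above, but the port will differ on your system) of any node in the cluster. You can test it with curl from the master and, since this tutorial is about load balancing, scale the deployment so the service spreads requests across several pods:

curl http://192.168.0.103:32456
kubectl scale deployment nginx --replicas=2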

Congratulations! You have successfully deployed Nginx on the Kubernetes cluster. You can also easily add more worker nodes to the cluster. For more information, refer to the official Kubernetes documentation. Feel free to ask me if you have any questions.
