Introduction

In this lab we will build out a small Kubernetes cluster using kubeadm, along with a test Wordpress app and an ingress controller. The lab consists of a bastion VM, a load balancer, 3 control plane nodes and 3 worker nodes.

This lab assumes that libvirt is set up with DNS resolution of host names from within the subnet, and that Ansible is available.
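A quick way to confirm the libvirt side of that assumption is to dump the lab network definition and check that it declares a DNS domain. The network name kubernetes below is only a guess; substitute whichever libvirt network the lab uses:

virsh net-dumpxml kubernetes | grep -i domain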

Initial Cluster Build

Prerequisites

  1. Clone the lab repo from GitHub with git clone --recursive https://github.com/greeninja/kvm-kube-kubeadm-lab.git
  2. Copy the inventory.yaml.example file to inventory.yaml
  3. Set things like the ssh_key and ssh_password in inventory.yaml

Setup VMs

  • Run the setup.yaml playbook in the lab-setup/playbooks directory once the inventory.yaml file has been completed.

ansible-playbook -i inventory.yaml lab-setup/playbooks/setup.yaml
  • The result should look something like this:

 Id    Name                           State
--------------------------------------------------
 57    kube-master01.kubernetes.lab   running
 58    bastion.kubernetes.lab         running
 59    kube-master02.kubernetes.lab   running
 60    kube-node01.kubernetes.lab     running
 61    kube-master03.kubernetes.lab   running
 62    kube-node02.kubernetes.lab     running
 63    kube-node03.kubernetes.lab     running
 103   lb01.kubernetes.lab            running

Verify DNS

  • From the bastion (10.44.60.5), ensure that all the other nodes resolve in DNS

for i in kube-master{01..03} kube-node{01..03}; do
  echo -n "$i => "; dig +short $i;
done
  • Example output
    
      kube-master01 => 10.44.60.184
      kube-master02 => 10.44.60.153
      kube-master03 => 10.44.60.161
      kube-node01 => 10.44.60.175
      kube-node02 => 10.44.60.152
      kube-node03 => 10.44.60.185
      

Setup Kubeadm

Setup the Kubernetes Nodes

With the inventory file that was created to build the lab, the quick start playbook can be run. Alternatively, if the bastion node will be used to run Ansible, an inventory file like the following will suffice.



kube-masters:
  hosts:
    kube-master01:
      ansible_host: 10.44.60.36
    kube-master02:
      ansible_host: 10.44.60.56
    kube-master03:
      ansible_host: 10.44.60.28

kube-nodes:
  hosts:
    kube-node01:
      ansible_host: 10.44.60.91
    kube-node02:
      ansible_host: 10.44.60.55
    kube-node03:
      ansible_host: 10.44.60.27
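Before running any playbooks it is worth a quick sanity check that Ansible can reach every node with this inventory:

ansible -i inventory.yaml all -m ping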

The quick start Ansible playbook is included in the GitHub repo. This can be run with ansible-playbook -i inventory.yaml quickstart/setup.yaml

If the quickstart playbook is used, continue from the Setup Kubernetes section.

Setup Ansible on the Bastion node

If the Bastion node won’t be used to run Ansible, these steps can be skipped; continue from the Ansible is Setup section.

  • Add EPEL repository

yum install -y epel-release
  • Install Ansible

yum install -y ansible

Ansible is Setup

  • Using this inventory, install Podman

ansible -i inventory.yaml all -m package -a "name=podman state=installed"
  • Disable SELinux

ansible -i inventory.yaml all -m selinux -a "state=disabled"
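Note that the selinux module change only takes full effect after a reboot; to relax enforcement immediately without rebooting, permissive mode can be set ad hoc (assuming, as with the other commands here, sufficient privileges on the nodes):

ansible -i inventory.yaml all -m command -a "setenforce 0"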
  • Add the Kubernetes repo to the machines

ansible -i inventory.yaml all -m yum_repository -a \
 "name=kubernetes \
 baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch \
 file=kubernetes \
 enabled=yes \
 gpgcheck=yes \
 repo_gpgcheck=yes \
 gpgkey='https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg' \
 description=kube-repo"
  • Install kubelet, kubeadm and kubectl on the Kubernetes nodes

ansible -i inventory.yaml all -m package -a "name=kubeadm,kubelet,kubectl state=installed"
  • Start Kubelet

ansible -i inventory.yaml all -m service -a "name=kubelet state=started enabled=true"

Setup the Load Balancer

For this lab, the load balancer will be a simple NGINX TCP load balancer.

This is set up by the quickstart/setup.yaml playbook.

  • Install EPEL

ansible -i inventory.yaml load_balancers -m package -a "name=epel-release state=installed"
  • Install NGINX

ansible -i inventory.yaml load_balancers -m package -a "name=nginx state=installed"
  • Set up nginx.conf to act as a proxy to the 3 masters.


# Setup with ansible

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
  worker_connections 1024;
}

stream {
  upstream kube_api {
    least_conn;
    server kube-master01:6443 max_fails=2 fail_timeout=10s;
    server kube-master02:6443 max_fails=2 fail_timeout=10s;
    server kube-master03:6443 max_fails=2 fail_timeout=10s;
  }

  server {
    listen 6443;
    proxy_pass kube_api;
  }
}

http {
  server {
    listen 8080 default_server;
    server_name _;
    root /usr/share/nginx/html;
    include /etc/nginx/default.d/*.conf;

    location / {
    }

    location /nginx_status {
      stub_status;
      allow all;
    }

    error_page 404 /404.html;
    location = /40x.html {
    }
  }
}
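The quickstart playbook takes care of enabling and starting NGINX on the load balancer; done by hand it would look something like the kubelet service command above:

ansible -i inventory.yaml load_balancers -m service -a "name=nginx state=started enabled=true"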

Setup Kubernetes

Initial Control Plane

From the first control plane node, kube-master01:

  • Initialise the control plane

kubeadm init \
 --control-plane-endpoint "lb01.kubernetes.lab:6443" \
 --upload-certs \
 --pod-network-cidr=10.244.0.0/16
  • Example output
    
      Your Kubernetes control-plane has initialized successfully!
    
    To start using your cluster, you need to run the following as a regular user:
    
    mkdir -p $HOME/.kube
      sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
      sudo chown $(id -u):$(id -g) $HOME/.kube/config
    
    Alternatively, if you are the root user, you can run:
    
    export KUBECONFIG=/etc/kubernetes/admin.conf
    
    You should now deploy a pod network to the cluster.
    Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    https://kubernetes.io/docs/concepts/cluster-administration/addons/
    
    You can now join any number of the control-plane node running the following command on each as root:
    
    kubeadm join lb01.kubernetes.lab:6443 --token 1o3clk.vt2g8ix8lnqpfdpe \
     --discovery-token-ca-cert-hash sha256:015ab1bb3c83d23d0810303b40dfeb8e72977129c614290d61425b5af932a452 \
     --control-plane --certificate-key 72102764e59b60d471e5d74244bb279ec17995c486d91261f4864e306aad8b92
    
    Please note that the certificate-key gives access to cluster sensitive data, keep it secret!
    As a safeguard, uploaded-certs will be deleted in two hours; If necessary, you can use
    "kubeadm init phase upload-certs --upload-certs" to reload certs afterward.
    
    Then you can join any number of worker nodes by running the following on each as root:
    
    kubeadm join lb01.kubernetes.lab:6443 --token 1o3clk.vt2g8ix8lnqpfdpe \
     --discovery-token-ca-cert-hash sha256:015ab1bb3c83d23d0810303b40dfeb8e72977129c614290d61425b5af932a452
    

Keep a note of the join commands, as these will be required later to add the remaining nodes to the cluster.

Bastion

  • Set up the Bastion node with the Kube config

mkdir ~/.kube
  • Copy the kube config from the first control plane node

scp kube-master01:/etc/kubernetes/admin.conf ~/.kube/config
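This assumes kubectl is also installed on the Bastion. If it isn’t, it can be installed from the Kubernetes repo added earlier, provided that repo is also configured on the Bastion:

yum install -y kubectl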
  • Confirm kubectl works from the Bastion

kubectl get nodes

  • Example output

      NAME                           STATUS     ROLES                  AGE   VERSION
      kube-master01.kubernetes.lab   NotReady   control-plane,master   15m   v1.20.2

Flannel Pod Network

  • Install Flannel as the Pod Network


kubectl apply -f https://github.com/coreos/flannel/raw/master/Documentation/kube-flannel.yml
  • Wait until all pods are Running

kubectl get pods --all-namespaces
  • Example output
    
      NAMESPACE     NAME                                                   READY   STATUS    RESTARTS   AGE
      kube-system   coredns-74ff55c5b-cnx5x                                1/1     Running   0          3m56s
      kube-system   coredns-74ff55c5b-k5pkb                                1/1     Running   0          3m56s
      kube-system   etcd-kube-master01.kubernetes.lab                      1/1     Running   0          4m4s
      kube-system   kube-apiserver-kube-master01.kubernetes.lab            1/1     Running   0          4m4s
      kube-system   kube-controller-manager-kube-master01.kubernetes.lab   1/1     Running   0          4m4s
      kube-system   kube-flannel-ds-vctqm                                  1/1     Running   0          71s
      kube-system   kube-proxy-5f86b                                       1/1     Running   0          3m56s
      kube-system   kube-scheduler-kube-master01.kubernetes.lab            1/1     Running   0          4m4s
      

Extra Control Plane Nodes

  • On the remaining control plane nodes, run the control-plane join command noted from the kubeadm init output

kubeadm join lb01.kubernetes.lab:6443 \
 --token qp81u4.42zph1nkq54xb8iw \
 --discovery-token-ca-cert-hash sha256:3c95d692e25df8167e2c644cbd58aee282b32d4535957ac53daf3b7277984eeb \
 --control-plane --certificate-key 1fe3ad3aa4f18b3a1618279a1408223234472577d911c60f3e6f30a00f65ceff
  • Example of kubectl get nodes after all Control plane nodes have been added

NAME                           STATUS   ROLES                  AGE     VERSION
kube-master01.kubernetes.lab   Ready    control-plane,master   26m     v1.20.2
kube-master02.kubernetes.lab   Ready    control-plane,master   16m     v1.20.2
kube-master03.kubernetes.lab   Ready    control-plane,master   6m31s   v1.20.2

Add Worker Nodes

  • On each worker node, run the worker join command noted from the kubeadm init output

kubeadm join lb01.kubernetes.lab:6443 \
 --token qp81u4.42zph1nkq54xb8iw \
 --discovery-token-ca-cert-hash sha256:3c95d692e25df8167e2c644cbd58aee282b32d4535957ac53daf3b7277984eeb
  • The cluster should now have 3 Control plane nodes and 3 worker nodes

NAME                           STATUS   ROLES                  AGE   VERSION
kube-master01.kubernetes.lab   Ready    control-plane,master   30m   v1.20.2
kube-master02.kubernetes.lab   Ready    control-plane,master   20m   v1.20.2
kube-master03.kubernetes.lab   Ready    control-plane,master   11m   v1.20.2
kube-node01.kubernetes.lab     Ready    <none>                 91s   v1.20.2
kube-node02.kubernetes.lab     Ready    <none>                 64s   v1.20.2
kube-node03.kubernetes.lab     Ready    <none>                 47s   v1.20.2

HAProxy Ingress Controller

This lab will be using the HAProxy ingress controller.

To deploy HAProxy to the cluster, apply the deploy yaml from GitHub on the Bastion node.


kubectl apply -f https://gist.githubusercontent.com/greeninja/375b23fb504b8bbfb855347943de553c/raw/f72a3bb2d60d10c93c0dd0a07cfbdd212f3b1ee4/centos-7-haproxy-ingress-controller.yaml
  • Check that there are 3 haproxy-ingress pods running

kubectl -n haproxy-controller get pods -o wide
  • Example output
    
      NAME                                       READY   STATUS    RESTARTS   AGE     IP             NODE                         NOMINATED NODE   READINESS GATES
      haproxy-ingress-587b77b44f-cbwzc           1/1     Running   0          2m20s   10.44.60.152   kube-node01.kubernetes.lab   <none>           <none>
      haproxy-ingress-587b77b44f-d5s6k           1/1     Running   0          41s     10.44.60.156   kube-node03.kubernetes.lab   <none>           <none>
      haproxy-ingress-587b77b44f-hzblt           1/1     Running   0          41s     10.44.60.200   kube-node02.kubernetes.lab   <none>           <none>
      ingress-default-backend-78f5cc7d4c-8hbbg   1/1     Running   0          2m20s   10.244.5.6     kube-node03.kubernetes.lab   <none>           <none>
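Depending on how the deploy yaml exposes the controller, it may also create a Service in the namespace; this can be checked with:

kubectl -n haproxy-controller get svc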
      

Wordpress Test

Longhorn Storage Class

This section will create an example Wordpress deployment inside the Kubernetes cluster. As Wordpress also requires storage for both the web app and the database, we will first deploy a storage service. As it came up in conversation the other day, this will be Longhorn.

Ensure iscsi-initiator-utils is installed on the nodes, otherwise the Longhorn deployment will hang. It is installed as part of the quick start playbook.
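If the quick start playbook wasn’t used, it can be installed across the Kubernetes nodes with something like this (group names as in the inventory above):

ansible -i inventory.yaml kube-masters,kube-nodes -m package -a "name=iscsi-initiator-utils state=installed"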


  • Deploy Longhorn

kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v0.8.0/deploy/longhorn.yaml
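Longhorn can take a few minutes to come up; before creating the ingress, wait until all pods in the longhorn-system namespace are Running:

kubectl -n longhorn-system get pods --watch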
  • Create an ingress object for the UI

kubectl -n longhorn-system apply -f https://raw.githubusercontent.com/greeninja/kvm-kube-kubeadm-lab/master/longhorn-ingress.yaml
  • Check that the ingress object has been created

kubectl -n longhorn-system get ingress longhorn-ingress
  • Example output

    
      NAME               CLASS    HOSTS                     ADDRESS   PORTS   AGE
      longhorn-ingress   <none>   longhorn.kubernetes.lab             80      90m
      

  • Providing proxies / DNS are available, the web GUI should now be reachable at longhorn.kubernetes.lab

Longhorn GUI
  • Check the storage class exists

kubectl get storageclass
  • Example output

NAME       PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
longhorn   driver.longhorn.io   Delete          Immediate           true                   106m
  • Add the storage class to the cluster if it doesn’t exist

kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/v0.8.0/examples/storageclass.yaml
  • Set Longhorn as the default storage class

kubectl patch storageclass longhorn -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
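To confirm the patch took effect, list the storage classes again; the longhorn entry should now be marked (default) in the NAME column:

kubectl get storageclass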

Wordpress Deployment

  • Create the Wordpress namespace

kubectl apply -f https://raw.githubusercontent.com/greeninja/kvm-kube-kubeadm-lab/master/wordpress-namespace.yaml
  • Example output

    
      namespace/wordpress created
      

  • Create the database required for Wordpress


kubectl -n wordpress apply -f https://raw.githubusercontent.com/greeninja/kvm-kube-kubeadm-lab/master/mysql-deployment.yaml
  • Wait until the database pod is Running

kubectl -n wordpress get pods
  • Example output

    
      NAME                              READY   STATUS    RESTARTS   AGE
      wordpress-mysql-8c76b9544-dtqbz   1/1     Running   0          64s
      

  • Create the Wordpress pod


kubectl -n wordpress apply -f https://raw.githubusercontent.com/greeninja/kvm-kube-kubeadm-lab/master/wordpress-deployment.yaml
  • Wait until the Wordpress pod is Running

kubectl -n wordpress get pods
  • Example output
    
      NAME                              READY   STATUS    RESTARTS   AGE
      wordpress-79c9564674-54p98        1/1     Running   0          70s
      wordpress-mysql-8c76b9544-dtqbz   1/1     Running   0          3m5s
      

If you set up the Longhorn GUI, then at this point you should see the two volumes in use in the dashboard.
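The same can be checked from the command line; assuming the deployment manifests create persistent volume claims (which the Longhorn volumes above suggest), both should show as Bound against the longhorn storage class:

kubectl -n wordpress get pvc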

  • Create an ingress object for Wordpress

kubectl -n wordpress apply -f https://raw.githubusercontent.com/greeninja/kvm-kube-kubeadm-lab/master/wordpress-ingress.yaml
  • Check the ingress object was created

kubectl -n wordpress get ingress
  • Example output

    
      NAME                CLASS    HOSTS                      ADDRESS   PORTS   AGE
      wordpress-ingress   <none>   wordpress.kubernetes.lab             80      39s
      

  • Providing proxies / DNS are available, Wordpress should be reachable at wordpress.kubernetes.lab

Wordpress Setup Screen

Conclusion

At this point there should be a highly available Kubernetes cluster with 3 control-plane nodes and 3 worker nodes, a functioning storage class for dynamic provisioning of persistent volume claims, and an ingress controller to allow HTTP access into the running applications. For testing purposes there is a Wordpress deployment backed by the storage class to retain data.

Cleanup

To clean up this lab there is a teardown playbook in the GitHub repo.

  • To run the cleanup playbook

ansible-playbook -i inventory.yaml lab-setup/playbooks/teardown.yaml