Introduction

This lab walks through the setup of a small Gluster 3.8.4 cluster, the provisioning of some test workloads and, finally, the upgrade to the latest stable version of Gluster at the time of writing (6.0). It is only a small lab and does not cover other components included in RHGS, such as NFS-Ganesha, as the clients all use the Gluster FUSE driver. Full upgrade instructions for RHGS can be found here.

Requirements

This lab requires a libvirt host with at least:

  • Memory >= 30GB
  • vCPU >= 14
  • Disk >= 200GB

VMs created with this lab

 Hostname         vCPU   Memory   OS Disk              Brick Size   Brick Count   Optional
 gluster01        4      8GB      40GB                 10GB         2             No
 gluster02        4      8GB      40GB                 10GB         2             No
 gluster03        4      8GB      40GB                 10GB         2             No
 gluster-client   2      4GB      9GB (from Gluster)   -            -             Yes

Setup

VMs

The following Bash script sets up three VMs, each with two extra 10GB drives for the Gluster bricks. This lab uses the RHEL 7.9 KVM image.



#!/bin/bash
                                                         
# Node building vars                                                                                               
image_dir="/var/lib/libvirt/images"
base_os_img="/var/lib/libvirt/images/iso/rhel-server-7.9-x86_64-kvm.qcow2"
ssh_pub_key="/root/.ssh/id_ed25519.pub"

# Network Vars
dns_domain="gluster.lab"

# Extra Vars
root_password="password"
os_drive_size="40G"
tmp_dir="/tmp"
                                                         
                                                         
##### Start #####            
                                                         
# Exit on any failure  
                                                         
set -e           
                                                         
# Create Network files                                                                                             
                                                                                                                   
echo "Creating gluster-lab xml file"                                                                               
                                                         
cat <<EOF > $tmp_dir/gluster-lab.xml
<network>                            
  <name>gluster-lab</name>
  <bridge name="virbr1234"/>
  <forward mode="nat"/>
  <domain name="gluster.lab"/>
  <ip address="10.44.50.1" netmask="255.255.255.0">    <dhcp>
      <range start="10.44.50.10" end="10.44.50.100"/>
    </dhcp>
  </ip>   
</network>               
EOF

echo "Creating gluster network in libvirt"

check_rep=$(virsh net-list --all | grep gluster-lab >/dev/null && echo "0" || echo "1")

networks=()

if [[ $check_rep == "1" ]]; then
  networks+=("gluster-lab")
fi

net_len=$(echo "${#networks[@]}")

if [ "$net_len" -ge 1 ]; then
  for network in ${networks[@]}; do 
    virsh net-define $tmp_dir/$network.xml
    virsh net-start $network
    virsh net-autostart $network
  done
else
  echo "Network already created"
fi

# Check OS image exists

if [ -f "$base_os_img" ]; then
  echo "Base OS image exists"
else
  echo "Base image doesn't exist ($base_os_img). Exiting"
  exit 1
fi

echo "Building Gluster nodes"

count=1

for i in `seq -w 01 03`; do 
  check=$(virsh list --all | grep gluster$i.$dns_domain > /dev/null && echo "0" || echo "1" )
  if [[ $check == "0" ]]; then
    echo "gluster$i.$dns_domain already exists"
    count=$(( $count + 1 ))
  else
    echo "Starting gluster$i"
    echo "Creating $image_dir/gluster$i.$dns_domain.qcow2 at $os_drive_size"
    qemu-img create -f qcow2 $image_dir/gluster$i.$dns_domain.qcow2 $os_drive_size
    for c in {1..2}; do 
      qemu-img create -f qcow2 $image_dir/gluster$i-disk$c.$dns_domain.qcow2 10G
    done
    echo "Resizing base OS image"
    virt-resize --expand /dev/sda1 $base_os_img $image_dir/gluster$i.$dns_domain.qcow2
    echo "Customising OS for gluster$i"
    virt-customize -a $image_dir/gluster$i.$dns_domain.qcow2 \
      --root-password password:$root_password \
      --uninstall cloud-init \
      --hostname gluster$i.$dns_domain \
      --ssh-inject root:file:$ssh_pub_key \
      --selinux-relabel
    echo "Defining gluster$i"
    virt-install --name gluster$i.$dns_domain \
      --virt-type kvm \
      --memory 8192 \
      --vcpus 4 \
      --boot hd,menu=on \
      --disk path=$image_dir/gluster$i.$dns_domain.qcow2,device=disk \
      --disk path=$image_dir/gluster$i-disk1.$dns_domain.qcow2,device=disk \
      --disk path=$image_dir/gluster$i-disk2.$dns_domain.qcow2,device=disk \
      --os-type Linux \
      --os-variant centos7 \
      --network network:gluster-lab \
      --graphics spice \
      --noautoconsole
    
    count=$(( $count + 1 ))
  fi
done
# Print running VMs

virsh list

  • Get the IP addresses from libvirt DHCP

# virsh net-dhcp-leases gluster-lab
 Expiry Time           MAC address         Protocol   IP address       Hostname    Client ID or DUID
------------------------------------------------------------------------------------------------------
 2021-01-08 12:19:51   52:54:00:0f:d5:82   ipv4       10.44.50.91/24   gluster01   -
 2021-01-08 12:22:24   52:54:00:a9:e5:be   ipv4       10.44.50.92/24   gluster02   -
 2021-01-08 12:23:52   52:54:00:d7:01:21   ipv4       10.44.50.67/24   gluster03   -
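
If the lab host (and later the client VM) needs to resolve the node names, /etc/hosts entries based on the leases above can be added. This is just an illustrative snippet using the addresses shown; adjust it to whatever your leases report:

# cat <<EOF >> /etc/hosts
10.44.50.91  gluster01 gluster01.gluster.lab
10.44.50.92  gluster02 gluster02.gluster.lab
10.44.50.67  gluster03 gluster03.gluster.lab
EOF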

Requirements

  • Simple Ansible inventory file

---
gluster:
  hosts:
    gluster01:
      ansible_host: 10.44.50.91
    gluster02:
      ansible_host: 10.44.50.92
    gluster03:
      ansible_host: 10.44.50.67
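
As a quick sanity check (assuming the commands are run as root with the SSH key injected by the build script), confirm Ansible can reach all three nodes:

# ansible -i inventory.yaml all -m ping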
  • Ensure Chrony is installed, started and enabled

# ansible -i inventory.yaml all -m package -a "name=chrony state=installed"

# ansible -i inventory.yaml all -m service -a "name=chronyd state=started enabled=true"

Subscriptions

  • Attach entitlement pools to the system

# ansible -i inventory.yaml all -m shell -a "subscription-manager attach --pool=<Pool ID>"
  • Enable the RHEL and Gluster repositories

# ansible -i inventory.yaml all -m shell -a "subscription-manager repos --enable=rhel-7-server-rpms --enable=rh-gluster-3-for-rhel-7-server-rpms"

Install Gluster 3.8.4

  • Install the packages

# ansible -i inventory.yaml all -m package -a "name=glusterfs-server-3.8.4 state=installed"
  • Start the Gluster service

# ansible -i inventory.yaml all -m service -a "name=glusterd state=started"
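
Optionally, confirm the installed version on every node before carrying on:

# ansible -i inventory.yaml all -m shell -a "gluster --version | head -1"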

Setup Gluster Bricks


  • Partition the two brick drives on each node with an LVM partition. The manual fdisk example below is for /dev/vdb; repeat for /dev/vdc

# fdisk /dev/vdb
Welcome to fdisk (util-linux 2.23.2).                                                                                                                                
                                                                                                                                                                     
Changes will remain in memory only, until you decide to write them.                                                                                                  
Be careful before using the write command.                                                                                                                           
                                                                                                                                                                     
Device does not contain a recognized partition table                                                                                                                 
Building a new DOS disklabel with disk identifier 0x37bfe7a8.                                                                                                        
                                                                                                                                                                     
Command (m for help): p                                                                                                                                              
                                                                                                                                                                     
Disk /dev/vdb: 10.7 GB, 10737418240 bytes, 20971520 sectors                                                                                                          
Units = sectors of 1 * 512 = 512 bytes                                                                                                                               
Sector size (logical/physical): 512 bytes / 512 bytes                                                                                                                
I/O size (minimum/optimal): 512 bytes / 512 bytes                                                                                                                    
Disk label type: dos                                                                                                                                                 
Disk identifier: 0x37bfe7a8                                                                                                                                          
                                                                                                                                                                     
   Device Boot      Start         End      Blocks   Id  System                                                                                                       
                                                                                                                                                                     
Command (m for help): n                                                                                                                                              
Partition type:                                                                                                                                                      
   p   primary (0 primary, 0 extended, 4 free)                                                                                                                       
   e   extended                                                                                                                                                      
Select (default p): p                                                                                                                                                
Partition number (1-4, default 1): 1                                                                                                                                 
First sector (2048-20971519, default 2048):                                                                                                                          
Using default value 2048                                                                                                                                             
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519):                                                                                             
Using default value 20971519                                                                                                                                         
Partition 1 of type Linux and of size 10 GiB is set                                                                                                                  

Command (m for help): t
Selected partition 1
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): p                                                                                                                                              
                                                                                                                                                                     
Disk /dev/vdb: 10.7 GB, 10737418240 bytes, 20971520 sectors                                                                                                          
Units = sectors of 1 * 512 = 512 bytes                                                                                                                               
Sector size (logical/physical): 512 bytes / 512 bytes                                                                                                                
I/O size (minimum/optimal): 512 bytes / 512 bytes                                                                                                                    
Disk label type: dos                                                                                                                                                 
Disk identifier: 0x37bfe7a8                                                                                                                                          
                                                                                                                                                                     
   Device Boot      Start         End      Blocks   Id  System                                                                                                       
/dev/vdb1            2048    20971519    10484736   8e  Linux LVM
                                                                                                                                                                     
Command (m for help): w                                                                                                                                              
The partition table has been altered!                                                                                                                                
                                                                                                                                                                     
Calling ioctl() to re-read partition table.                                                                                                                          
Syncing disks.

Ansible drive partitions

# ansible -i inventory.yaml all -m parted -a "device=/dev/vdb number=1 flags=lvm state=present"
# ansible -i inventory.yaml all -m parted -a "device=/dev/vdc number=1 flags=lvm state=present"


  • Create a PV on each new partition

# pvcreate /dev/vdb1
# pvcreate /dev/vdc1
# pvs

  • Create a VG from each PV


# vgcreate b01 /dev/vdb1
# vgcreate b02 /dev/vdc1
# vgs

  • Create an LV that uses all of the space in each VG


# lvcreate -l 100%FREE -n brick01 b01
# lvcreate -l 100%FREE -n brick02 b02
# lvs

  • Create an XFS file system on each LV

# mkfs.xfs -i size=512 /dev/b01/brick01 
# mkfs.xfs -i size=512 /dev/b02/brick02

Ansible VG, LV and filesystem creation

# ansible -i inventory.yaml all -m lvg -a "vg=b01 pvs=/dev/vdb1 state=present"
# ansible -i inventory.yaml all -m lvg -a "vg=b02 pvs=/dev/vdc1 state=present"

# ansible -i inventory.yaml all -m lvol -a "vg=b01 lv=brick01 pvs=/dev/vdb1 size=+100%FREE"
# ansible -i inventory.yaml all -m lvol -a "vg=b02 lv=brick02 pvs=/dev/vdc1 size=+100%FREE"

# ansible -i inventory.yaml all -m filesystem -a "fstype=xfs dev=/dev/b01/brick01"
# ansible -i inventory.yaml all -m filesystem -a "fstype=xfs dev=/dev/b02/brick02"

  • Create the brick mount points

# mkdir -p /bricks/brick01
# mkdir -p /bricks/brick02

Ansible create mount points

# ansible -i inventory.yaml all -m file -a "path=/bricks/brick01 recurse=yes state=directory"
# ansible -i inventory.yaml all -m file -a "path=/bricks/brick02 recurse=yes state=directory"

  • Add the bricks to /etc/fstab

/dev/b01/brick01        /bricks/brick01 xfs     rw,noatime,inode64,nouuid   1   2
/dev/b02/brick02        /bricks/brick02 xfs     rw,noatime,inode64,nouuid   1   2
  • Mount bricks

# mount -a
  • Check bricks have mounted

# df -h | grep brick

Ansible fstab entries and mount

# ansible -i inventory.yaml all -m mount -a "src=/dev/b01/brick01 path=/bricks/brick01 opts=rw,noatime,inode64,nouuid fstype=xfs state=present"
# ansible -i inventory.yaml all -m mount -a "src=/dev/b02/brick02 path=/bricks/brick02 opts=rw,noatime,inode64,nouuid fstype=xfs state=present"

# ansible -i inventory.yaml all -m mount -a "src=/dev/b01/brick01 path=/bricks/brick01 opts=rw,noatime,inode64,nouuid fstype=xfs state=mounted"
# ansible -i inventory.yaml all -m mount -a "src=/dev/b02/brick02 path=/bricks/brick02 opts=rw,noatime,inode64,nouuid fstype=xfs state=mounted"


  • Create a brick directory inside each mount

# mkdir -p /bricks/brick01/brick
# mkdir -p /bricks/brick02/brick

  • Repeat the manual steps above on the remaining two Gluster nodes, or use the Ansible equivalents shown throughout.

Ansible create brick directory

# ansible -i inventory.yaml all -m file -a "path=/bricks/brick01/brick recurse=yes state=directory"
# ansible -i inventory.yaml all -m file -a "path=/bricks/brick02/brick recurse=yes state=directory"

Create Gluster Volumes

  • Start Gluster on all nodes

# ansible -i inventory.yaml all -m service -a "name=glusterd state=started"
  • From one node, probe the other two to bring them into the cluster

# gluster peer probe gluster02
peer probe: success. 
# gluster peer probe gluster03
peer probe: success.
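
Before creating volumes, it is worth confirming that all peers are connected. gluster peer status lists the other nodes, while gluster pool list includes the local node as well:

# gluster peer status
# gluster pool list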
  • Create two replicated volumes (replica 3), each using one brick from every node

# gluster volume create rep-vol-1 replica 3 \
  gluster01:/bricks/brick01/brick \
  gluster02:/bricks/brick01/brick \
  gluster03:/bricks/brick01/brick

volume create: rep-vol-1: success: please start the volume to access data
  
# gluster volume create rep-vol-2 replica 3 \
  gluster01:/bricks/brick02/brick \
  gluster02:/bricks/brick02/brick \
  gluster03:/bricks/brick02/brick

volume create: rep-vol-2: success: please start the volume to access data

  • Check the volumes have been created

# gluster volume list
rep-vol-1
rep-vol-2
  • Start the volumes

# gluster volume start rep-vol-1
volume start: rep-vol-1: success

# gluster volume start rep-vol-2
volume start: rep-vol-2: success

  • Check status of the volumes (rep-vol-1 example)

# gluster volume status rep-vol-1
Status of volume: rep-vol-1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01:/bricks/brick01/brick       49152     0          Y       11213
Brick gluster02:/bricks/brick01/brick       49152     0          Y       10977
Brick gluster03:/bricks/brick01/brick       49152     0          Y       10945
Self-heal Daemon on localhost               N/A       N/A        Y       11282
Self-heal Daemon on gluster02               N/A       N/A        Y       11038
Self-heal Daemon on gluster03               N/A       N/A        Y       11007
 
Task Status of Volume rep-vol-1
------------------------------------------------------------------------------
There are no active volume tasks
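
To see the replica layout and volume options rather than the runtime state, gluster volume info can be used as well:

# gluster volume info rep-vol-1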

Test Setups

Add Data to the Volumes

  • Install glusterfs-fuse on the client machine

Make sure the client can resolve the Gluster host names.


# yum install glusterfs-fuse -y
  • Mount first Gluster volume

# mount -t glusterfs -o acl gluster01:/rep-vol-1 /mnt
  • Copy some files to the mount and record their md5 sums so they can be checked later

# cp /var/lib/libvirt/images/iso/rhcos-qemu-4.5.6.x86_64.qcow2 /mnt/

# md5sum /mnt/rhcos-qemu-4.5.6.x86_64.qcow2 
56ad9157aaf710ba61f8f42a780011e5  /mnt/rhcos-qemu-4.5.6.x86_64.qcow2
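
To have a few more files to verify after the upgrade, something like the following can be used to generate throwaway test data and record checksums. The file names, sizes and the checksum file location are arbitrary choices for this lab:

# for i in {1..5}; do dd if=/dev/urandom of=/mnt/testfile$i bs=1M count=100; done
# md5sum /mnt/* > /root/pre-upgrade.md5
# md5sum -c /root/pre-upgrade.md5

Re-running the last command during and after the upgrade should keep reporting OK for every file.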

Use Gluster for a VM

Hosts file entries may be required on the libvirt host in order to mount the volume.

A netfs pool type is used because the glusterfs client installed locally on the libvirt host is not backward compatible with Gluster 3.8.4.

  • Set permissions on the Gluster pool

From a gluster node


# gluster volume set rep-vol-2 storage.owner-uid <qemu uid>
# gluster volume set rep-vol-2 storage.owner-gid <qemu gid>
# gluster volume set rep-vol-2 server.allow-insecure on
# gluster volume set rep-vol-2 rpc-auth-allow on
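
The qemu uid and gid placeholders above can be looked up on the libvirt host, where the qemu user exists, for example:

# id -u qemu
# id -g qemu

From a Gluster node, gluster volume get rep-vol-2 storage.owner-uid can then be used to confirm the option took effect.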

  • Make mount directory

From the libvirt host


# mkdir -p /var/lib/virt/glusterfs
  • Define XML for a Gluster storage pool

<pool type="netfs">
  <name>glusterfs</name>
  <source>
    <host name="gluster01"/>
    <dir path="rep-vol-2"/>
    <format type='glusterfs'/>
  </source>
  <target>
    <path>/var/lib/virt/glusterfs</path>
  </target>
</pool>
  • Create the storage pool

# virsh pool-define glusterfs-pool.xml 
Pool glusterfs defined from glusterfs-pool.xml

# virsh pool-start glusterfs
Pool glusterfs started
  • Confirm the pool has started

# virsh pool-list --all
 Name              State    Autostart
---------------------------------------
 default           active   yes
 glusterfs         active   no
  • Create VM OS disk on Gluster

# qemu-img create -f qcow2 /var/lib/virt/glusterfs/gluster-client.gluster.lab.qcow2 9G
  • Expand the base OS image into the client image

# virt-resize --expand /dev/sda1 /var/lib/libvirt/images/iso/rhel-server-7.9-x86_64-kvm.qcow2 /var/lib/virt/glusterfs/gluster-client.gluster.lab.qcow2
  • Customise VM

# virt-customize -a /var/lib/virt/glusterfs/gluster-client.gluster.lab.qcow2 \
  --root-password password:password \
  --uninstall cloud-init \
  --hostname gluster-client.gluster.lab \
  --ssh-inject root:file:/root/.ssh/id_ed25519.pub \
  --selinux-relabel
  • Create VM

SELinux on the libvirt host may prevent the VM from starting from a FUSE-mounted image. For this example, SELinux has been set to permissive.


# virt-install --name gluster-client.gluster.lab \
  --virt-type kvm \
  --memory 4096 \
  --vcpus 2 \
  --boot hd,menu=on \
  --disk path=/var/lib/virt/glusterfs/gluster-client.gluster.lab.qcow2,device=disk \
  --os-type Linux \
  --os-variant centos7 \
  --network network:gluster-lab \
  --graphics spice \
  --noautoconsole
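
Once the VM is defined and booting, its address can be picked up from the lab network's DHCP leases in the same way as the Gluster nodes (assuming the guest reports its hostname to DHCP):

# virsh net-dhcp-leases gluster-lab | grep gluster-client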

Install WordPress for testing

  • Install Podman on the client node

# subscription-manager repos --enable=rhel-7-server-rpms --enable=rhel-7-server-extras-rpms

# yum install podman

  • Create a pod, then run the MariaDB and WordPress containers inside it

# podman pod create --publish 8080:80
05e2219fc0a51acb4bf9ef13b528c6c0a1c54855efaf2f8f25148b7e2d321a47

# podman pod ps
POD ID         NAME                 STATUS    CREATED          # OF CONTAINERS   INFRA ID
05e2219fc0a5   stupefied_spence     Created   38 seconds ago   1                 292b16f2c58b

# podman volume create mariadb
# podman volume create wordpress

# podman run --pod=<pod name> -d \
  -e MYSQL_ROOT_PASSWORD=d0ddle \
  -e MYSQL_DATABASE=wordpress \
  -e MYSQL_USER=wordpress \
  -e MYSQL_PASSWORD=password \
  -v mariadb:/var/lib/mysql \
  mariadb

# podman run --pod=<pod name> -d \
  --name wordpress \
  -e WORDPRESS_DB_HOST=127.0.0.1 \
  -e WORDPRESS_DB_USER=wordpress \
  -e WORDPRESS_DB_PASSWORD=password \
  -e WORDPRESS_DB_NAME=wordpress \
  -v wordpress:/var/www/html \
  wordpress
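
At this point both containers should be running inside the pod; a quick overview can be had with:

# podman ps --pod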

  • Set up WordPress with wp-cli


# podman run --pod=<pod name> --rm -it --volumes-from wordpress  wordpress:cli bash

$ wp core install --url="gluster-client.gluster.lab:8080" \
  --title="Upgrade Test Site" \
  --admin_user=admin \
  --admin_password="password" \
  --admin_email=123@123.com

  • The site should now be available at http://gluster-client.gluster.lab:8080, assuming DNS or hosts file entries are in place

  • From the wp-cli container, generate a handful of posts to check the database is working during the upgrade



$ for i in {1..15}; do \
  wp post create \
  --post_title="Post $i" \
  --post_content="This is post number $i of 15." \
  --post_status=publish \
  --post_author=admin; \
  done



$ wp post list
+----+--------------+-------------+---------------------+-------------+
| ID | post_title   | post_name   | post_date           | post_status |
+----+--------------+-------------+---------------------+-------------+
| 22 | Post 14      | post-14     | 2021-01-12 15:55:03 | publish     |
| 23 | Post 15      | post-15     | 2021-01-12 15:55:03 | publish     |
| 21 | Post 13      | post-13     | 2021-01-12 15:55:02 | publish     |
| 20 | Post 12      | post-12     | 2021-01-12 15:55:01 | publish     |
| 19 | Post 11      | post-11     | 2021-01-12 15:55:00 | publish     |
| 18 | Post 10      | post-10     | 2021-01-12 15:54:59 | publish     |
| 16 | Post 8       | post-8      | 2021-01-12 15:54:58 | publish     |
| 17 | Post 9       | post-9      | 2021-01-12 15:54:58 | publish     |
| 14 | Post 6       | post-6      | 2021-01-12 15:54:56 | publish     |
| 15 | Post 7       | post-7      | 2021-01-12 15:54:56 | publish     |
| 13 | Post 5       | post-5      | 2021-01-12 15:54:55 | publish     |
| 12 | Post 4       | post-4      | 2021-01-12 15:54:54 | publish     |
| 11 | Post 3       | post-3      | 2021-01-12 15:54:53 | publish     |
| 10 | Post 2       | post-2      | 2021-01-12 15:54:52 | publish     |
| 9  | Post 1       | post-1      | 2021-01-12 15:54:51 | publish     |
| 1  | Hello world! | hello-world | 2021-01-12 15:24:35 | publish     |
+----+--------------+-------------+---------------------+-------------+

  • If continual database activity is required, a loop like the following adds a comment to a random post every 30 seconds:

$ for i in $(wp post list --field=ID); do arr+=( "$i" ); done
$ RANDOM=$$$(date +%s)
$ c=1
$ while true; do \
  s=${arr[$RANDOM % ${#arr[@]}]}; \
  wp comment create --comment_post_ID=$s --comment_content="comment number $c" --comment_author="wp-cli"; \
  sleep 30; \
  c=$(( $c + 1 )); \
  done
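
As an additional check from outside the pod, a simple curl loop against the site keeps read traffic flowing and makes it obvious if the front end stops responding during the upgrade (the URL assumes the DNS or hosts entries mentioned earlier):

# while true; do curl -s -o /dev/null -w "%{http_code}\n" http://gluster-client.gluster.lab:8080/; sleep 10; done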

Upgrade to Gluster 6

This follows the upgrade guides documented here.

During the upgrade, keep checking that the client mount from the "Add Data to the Volumes" section is still working. If the VM was set up as well, keep checking that it stays responsive.

  • From the first Gluster node check the peer and volume status

# gluster peer status

# gluster volume status

  • Check there are no pending self-heals on either volume.

# gluster volume heal rep-vol-1 info

# gluster volume heal rep-vol-2 info
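
Since the full heal info output is fairly verbose, a small loop that pulls out just the entry counts can be handy; every brick should report "Number of entries: 0" before proceeding:

# for v in rep-vol-1 rep-vol-2; do gluster volume heal $v info | grep "Number of entries"; done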

  • Back up the following directories if they exist
    • /var/lib/glusterd
    • /etc/samba
    • /etc/ctdb
    • /etc/glusterfs
    • /var/lib/samba
    • /var/lib/ctdb
    • /var/run/gluster/shared_storage/nfs-ganesha


# tar -czvf gluster_backup.tar.gz /var/lib/glusterd /etc/samba /etc/ctdb /etc/glusterfs /var/lib/samba /var/lib/ctdb /var/run/gluster/shared_storage/nfs-ganesha

  • Stop Gluster on the node and confirm no Gluster processes remain (the pgrep below should return nothing)

# systemctl stop glusterd
# pkill glusterfs
# pkill glusterfsd
# pgrep gluster
  • Run yum update on the node

# yum update

  • Disable the Gluster systemd unit so glusterd does not start automatically after the reboot; it will be started manually once the node is confirmed healthy

# systemctl disable glusterd
  • Reboot the node

# shutdown -r now "Shutting down for Gluster upgrade"
  • When the node is back up, check the version and brick mount points

# gluster --version
glusterfs 6.0

# df -h | grep bricks
/dev/mapper/b02-brick02   10G  2.9G  7.2G  29% /bricks/brick02
/dev/mapper/b01-brick01   10G  2.4G  7.7G  24% /bricks/brick01
  • Start Gluster

# systemctl start glusterd
  • Check Gluster volume status

# gluster volume status
Status of volume: rep-vol-1
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01:/bricks/brick01/brick       49152     0          Y       7529 
Brick gluster02:/bricks/brick01/brick       49152     0          Y       18426
Brick gluster03:/bricks/brick01/brick       49152     0          Y       18746
Self-heal Daemon on localhost               N/A       N/A        Y       7551 
Self-heal Daemon on gluster03               N/A       N/A        Y       18813
Self-heal Daemon on gluster02               N/A       N/A        Y       18493
 
Task Status of Volume rep-vol-1
------------------------------------------------------------------------------
There are no active volume tasks
 
Status of volume: rep-vol-2
Gluster process                             TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster01:/bricks/brick02/brick       49153     0          Y       7540 
Brick gluster02:/bricks/brick02/brick       49153     0          Y       18473
Brick gluster03:/bricks/brick02/brick       49153     0          Y       18793
Self-heal Daemon on localhost               N/A       N/A        Y       7551 
Self-heal Daemon on gluster02               N/A       N/A        Y       18493
Self-heal Daemon on gluster03               N/A       N/A        Y       18813
 
Task Status of Volume rep-vol-2
------------------------------------------------------------------------------
There are no active volume tasks
  • Start a self-heal on the two volumes

# gluster volume heal rep-vol-1
# gluster volume heal rep-vol-2
  • Check heal info

If there is a VM running on the rep-vol-2 volume, expect a heal of its disk image to be ongoing for a while.


# gluster volume heal rep-vol-1 info

# gluster volume heal rep-vol-2 info
Brick gluster01:/bricks/brick02/brick
Status: Connected
Number of entries: 0

Brick gluster02:/bricks/brick02/brick
/gluster-client.gluster.lab.qcow2 - Possibly undergoing heal
Status: Connected
Number of entries: 1

Brick gluster03:/bricks/brick02/brick
/gluster-client.gluster.lab.qcow2 - Possibly undergoing heal
Status: Connected
Number of entries: 1
  • Re-enable Gluster on the node

# systemctl enable glusterd

  • Repeat the upgrade steps on the remaining nodes. Once every node has been upgraded, update the cluster op-version from any node

# gluster volume set all cluster.op-version 70000
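
The active op-version can be confirmed from any node afterwards (an optional check):

# gluster volume get all cluster.op-version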

Summary

This is a very basic upgrade lab; in a production environment there is likely to be far more I/O against the bricks and multiple other services running to support the environment. This page covers those other services in detail and should be read before upgrading an important environment.

The upgrade in the lab was managed without a break in I/O to the running VM.

Cleanup

This teardown script should remove all of the VMs and their associated storage. It removes the client VM and the glusterfs storage pool first, then the Gluster nodes and the lab network.


#!/bin/bash                                                                                                                                                          
                                                                                                                                                                     
# Network Vars
dns_domain="gluster.lab"

# Extra Vars
tmp_dir="/tmp"

                                                                                  
##### Start #####

# Remove the client VM first

virsh destroy gluster-client.$dns_domain
virsh undefine gluster-client.$dns_domain

virsh pool-destroy glusterfs
virsh pool-undefine glusterfs 

# Remove Gluster VMs

for i in `seq -w 01 03`; do
  virsh destroy gluster$i.$dns_domain
  virsh undefine gluster$i.$dns_domain --remove-all-storage
done

# Remove Network files

echo "Removing gluster-lab xml file"

rm -f $tmp_dir/gluster-lab.xml

echo "Removing ceph networks in libvirt"

for network in gluster-lab; do
  virsh net-destroy $network
  virsh net-undefine $network
done
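
After the teardown, a quick look at what libvirt still knows about confirms the lab has been fully removed; these are read-only checks:

# virsh list --all
# virsh net-list --all
# virsh pool-list --all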