Ceph Octopus on CentOS 8 Stream Lab
Summary⌗
This is the starting point for future Ceph labs and tests. It is designed with a mixture of drive sizes to allow for different labs and scenarios.
The key difference between this and the Ceph Octopus Lab is that this one is built on VMs running CentOS 8 Stream.
It should take around an hour to build from scratch using the quick setup scripts.
Setup VMs⌗
There will be 13 VMs set up (19 with the optional nodes) and 2 networks. The Detailed Setup shows the full setup of 1 OSD node, while the Quick Setup script creates the whole environment from scratch.
Base System Requirements (Does not include optional nodes)
- CPU >= 48
- Memory >= 82GB
- Disk >= 760GB allocated (the qcow2 images are thin-provisioned, so actual usage starts lower)
Host | Role | Count | vCPU | Memory | Disk Size | OSD Disks | OSD Disk Size | Optional |
---|---|---|---|---|---|---|---|---|
bastion | bastion | 1 | 2 | 2GB | 40GB | 0 | - | No |
grafana | grafana | 1 | 2 | 4GB | 40GB | 0 | - | No |
monitor | mon/mgr | 3 | 4 | 4GB | 40GB | 0 | - | No |
t1-osd | osd | 4 | 4 | 8GB | 40GB | 4 | 5GB | No |
t2-osd | osd | 4 | 4 | 8GB | 40GB | 4 | 10GB | No |
rgw | rgw | 2 | 2 | 4GB | 40GB | 0 | - | Yes |
mds | mds | 2 | 4 | 8GB | 40GB | 0 | - | Yes |
iscsi | iscsi | 2 | 4 | 8GB | 40GB | 0 | - | Yes |
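For reference, the base (non-optional) totals can be re-derived from the table with a little shell arithmetic. This is just a sketch; the numbers mirror the table rows, so adjust them if you change the layout.

```shell
# Re-derive the base (non-optional) totals from the table above.
cpu=$(( 1*2 + 1*2 + 3*4 + 4*4 + 4*4 ))   # bastion, grafana, 3x mon, 4x t1-osd, 4x t2-osd
mem=$(( 1*2 + 1*4 + 3*4 + 4*8 + 4*8 ))   # GB
os_disk=$(( (1 + 1 + 3 + 4 + 4) * 40 ))  # GB of OS drives
osd_disk=$(( 4*4*5 + 4*4*10 ))           # GB of OSD drives (t1: 4x4x5GB, t2: 4x4x10GB)
echo "vCPU=$cpu Memory=${mem}GB Disk=$(( os_disk + osd_disk ))GB"
```

The disk figure is allocated qcow2 capacity and ignores thin provisioning.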
Detailed Setup⌗
Create Networks⌗
- Ceph presentation network
<network>
<name>ceph-presentation</name>
<bridge name="virbr3300"/>
<forward mode="nat"/>
<domain name="ceph.lab"/>
<ip address="10.44.20.1" netmask="255.255.255.0">
<dhcp>
<range start="10.44.20.200" end="10.44.20.210"/>
</dhcp>
</ip>
</network>
- Ceph replication network
<network>
<name>ceph-replication</name>
<bridge name="virbr3301"/>
<ip address="172.16.20.1" netmask="255.255.255.0">
<dhcp>
<range start="172.16.20.200" end="172.16.20.210"/>
</dhcp>
</ip>
</network>
- Create these in libvirt
# virsh net-define ceph-presentation.xml
Network ceph-presentation defined from ceph-presentation.xml
# virsh net-start ceph-presentation
Network ceph-presentation started
# virsh net-autostart ceph-presentation
Network ceph-presentation marked as autostarted
# virsh net-define ceph-replication.xml
Network ceph-replication defined from ceph-replication.xml
# virsh net-start ceph-replication
Network ceph-replication started
# virsh net-autostart ceph-replication
Network ceph-replication marked as autostarted
Create VM Example⌗
This will create an OSD node. Other node types don't need as many drives created.
- Create the OS drive for the node
# qemu-img create -f qcow2 /var/lib/libvirt/images/ceph-osd-t1-node01.qcow2 40G
Formatting '/var/lib/libvirt/images/ceph-osd-t1-node01.qcow2', fmt=qcow2 cluster_size=65536 compression_type=zlib size=42949672960 lazy_refcounts=off refcount_bits=16
- Expand the OS base image into the drive. This setup uses a CentOS 8 Stream cloud image
# virt-resize --expand /dev/sda1 /var/lib/libvirt/images/iso/CentOS-Stream-GenericCloud-8-20201217.0.x86_64.qcow2 /var/lib/libvirt/images/ceph-osd-t1-node01.qcow2
- Customise the OS so it can be used
# virt-customize -a /var/lib/libvirt/images/ceph-osd-t1-node01.qcow2 \
--root-password password:password \
--uninstall cloud-init \
--hostname ceph-osd-t1-node01 \
--ssh-inject root:file:/root/.ssh/id_ed25519.pub \
--selinux-relabel
- Create the 4 OSD drives (5GB for t1 nodes)
# for i in `seq -w 01 04`; do qemu-img create -f qcow2 /var/lib/libvirt/images/ceph-osd-t1-node01-osd$i.qcow2 5G; done
Formatting '/var/lib/libvirt/images/ceph-osd-t1-node01-osd01.qcow2', fmt=qcow2 cluster_size=65536 compression_type=zlib size=5368709120 lazy_refcounts=off refcount_bits=16
Formatting '/var/lib/libvirt/images/ceph-osd-t1-node01-osd02.qcow2', fmt=qcow2 cluster_size=65536 compression_type=zlib size=5368709120 lazy_refcounts=off refcount_bits=16
Formatting '/var/lib/libvirt/images/ceph-osd-t1-node01-osd03.qcow2', fmt=qcow2 cluster_size=65536 compression_type=zlib size=5368709120 lazy_refcounts=off refcount_bits=16
Formatting '/var/lib/libvirt/images/ceph-osd-t1-node01-osd04.qcow2', fmt=qcow2 cluster_size=65536 compression_type=zlib size=5368709120 lazy_refcounts=off refcount_bits=16
- Define the VM, with both networks and all drives attached. Remove --dry-run and --print-xml in order to actually create the domain.
# virt-install --name ceph-osd-t1-node01.ceph.lab \
--virt-type kvm \
--memory 8192 \
--vcpus 4 \
--boot hd,menu=on \
--disk path=/var/lib/libvirt/images/ceph-osd-t1-node01.qcow2,device=disk \
--disk path=/var/lib/libvirt/images/ceph-osd-t1-node01-osd01.qcow2,device=disk \
--disk path=/var/lib/libvirt/images/ceph-osd-t1-node01-osd02.qcow2,device=disk \
--disk path=/var/lib/libvirt/images/ceph-osd-t1-node01-osd03.qcow2,device=disk \
--disk path=/var/lib/libvirt/images/ceph-osd-t1-node01-osd04.qcow2,device=disk \
--os-type Linux \
--os-variant centos7 \
--network network:ceph-presentation \
--network network:ceph-replication \
--graphics spice \
--noautoconsole \
--dry-run \
--print-xml
Quick Setup⌗
Script options are set as variables. By default it won't build any of the optional nodes; if the vars are set to yes, they will be built. The script is somewhat idempotent as well. A teardown script is also available to clean this all up.
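The idempotency comes from a simple existence check against virsh list --all. Here is the pattern in isolation, with virsh stubbed out so the sketch runs anywhere; the stub and the exists helper are illustrative, not part of the script itself.

```shell
# Stub standing in for the real virsh binary (illustrative only).
virsh() { echo " 1   bastion.ceph.lab   running"; }

# The script's check pattern: "0" if the domain already exists, else "1".
exists() {
  virsh list --all | grep -q "$1" && echo "0" || echo "1"
}
exists bastion.ceph.lab   # prints "0" - already defined, skip the build
exists grafana.ceph.lab   # prints "1" - not defined yet, build it
```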
The nodes built by the script (including optional)
Hostname | Public IP | Replication IP |
---|---|---|
bastion.ceph.lab | DHCP | None |
grafana.ceph.lab | DHCP | None |
ceph-mon01.ceph.lab | 10.44.20.21 | 172.16.20.21 |
ceph-mon02.ceph.lab | 10.44.20.22 | 172.16.20.22 |
ceph-mon03.ceph.lab | 10.44.20.23 | 172.16.20.23 |
ceph-t1-osd01.ceph.lab | 10.44.20.31 | 172.16.20.31 |
ceph-t1-osd02.ceph.lab | 10.44.20.32 | 172.16.20.32 |
ceph-t1-osd03.ceph.lab | 10.44.20.33 | 172.16.20.33 |
ceph-t1-osd04.ceph.lab | 10.44.20.34 | 172.16.20.34 |
ceph-t2-osd01.ceph.lab | 10.44.20.41 | 172.16.20.41 |
ceph-t2-osd02.ceph.lab | 10.44.20.42 | 172.16.20.42 |
ceph-t2-osd03.ceph.lab | 10.44.20.43 | 172.16.20.43 |
ceph-t2-osd04.ceph.lab | 10.44.20.44 | 172.16.20.44 |
ceph-rgw01.ceph.lab | 10.44.20.111 | None |
ceph-rgw02.ceph.lab | 10.44.20.112 | None |
ceph-mds01.ceph.lab | 10.44.20.121 | None |
ceph-mds02.ceph.lab | 10.44.20.122 | None |
ceph-iscsi01.ceph.lab | 10.44.20.131 | None |
ceph-iscsi02.ceph.lab | 10.44.20.132 | None |
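The install steps later lean on name resolution (e.g. grep osd /etc/hosts), so the static entries above can be turned into /etc/hosts lines with a short generator. A sketch, assuming the IPs from the table; the output file name is arbitrary.

```shell
# Emit "IP shortname fqdn" lines for the statically addressed nodes
# (IPs taken from the table above; hosts.snippet is an arbitrary name).
domain="ceph.lab"
{
  for i in 1 2 3;   do echo "10.44.20.2$i ceph-mon0$i ceph-mon0$i.$domain"; done
  for i in 1 2 3 4; do echo "10.44.20.3$i ceph-t1-osd0$i ceph-t1-osd0$i.$domain"; done
  for i in 1 2 3 4; do echo "10.44.20.4$i ceph-t2-osd0$i ceph-t2-osd0$i.$domain"; done
} > hosts.snippet
wc -l < hosts.snippet   # 11 entries: 3 mons + 8 OSD nodes
```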
Scripts can also be found on GitHub
#!/bin/bash
# Node building vars
image_dir="/var/lib/libvirt/images"
base_os_img="/var/lib/libvirt/images/iso/CentOS-Stream-GenericCloud-8-20201217.0.x86_64.qcow2"
ssh_pub_key="/root/.ssh/id_ed25519.pub"
# Network Vars
dns_domain="ceph.lab"
# Extra Vars
root_password="password"
os_drive_size="40G"
tmp_dir="/tmp"
# Ceph Extra nodes
rgw=no
mds=no
iscsi=no
##### Start #####
# Exit on any failure
set -e
# Create Network files
echo "Creating ceph-presentation xml file"
cat <<EOF > $tmp_dir/ceph-presentation.xml
<network>
<name>ceph-presentation</name>
<bridge name="virbr3300"/>
<forward mode="nat"/>
<domain name="ceph.lab"/>
<ip address="10.44.20.1" netmask="255.255.255.0">
<dhcp>
<range start="10.44.20.200" end="10.44.20.210"/>
</dhcp>
</ip>
</network>
EOF
echo "Creating ceph-replication xml file"
cat <<EOF >$tmp_dir/ceph-replication.xml
<network>
<name>ceph-replication</name>
<bridge name="virbr3301"/>
<ip address="172.16.20.1" netmask="255.255.255.0">
<dhcp>
<range start="172.16.20.200" end="172.16.20.210"/>
</dhcp>
</ip>
</network>
EOF
echo "Creating Ceph networks in libvirt"
check_rep=$(virsh net-list --all | grep ceph-replication >/dev/null && echo "0" || echo "1")
check_pres=$(virsh net-list --all | grep ceph-presentation >/dev/null && echo "0" || echo "1")
networks=()
if [[ $check_rep == "1" ]]; then
networks+=("ceph-replication")
fi
if [[ $check_pres == "1" ]]; then
networks+=("ceph-presentation")
fi
net_len=${#networks[@]}
echo "Creating $net_len network(s)"
if [ "$net_len" -ge 1 ]; then
for network in ${networks[@]}; do
virsh net-define $tmp_dir/$network.xml
virsh net-start $network
virsh net-autostart $network
done
fi
# Check OS image exists
if [ -f "$base_os_img" ]; then
echo "Base OS image exists"
else
echo "Base image doesn't exist ($base_os_img). Exiting"
exit 1
fi
# Build OS drives for machines
echo "Starting build of VMs"
echo "Building Bastion & Grafana drives"
for node in bastion grafana; do
check=$(virsh list --all | grep $node.$dns_domain > /dev/null && echo "0" || echo "1" )
if [[ $check == "0" ]]; then
echo "$node.$dns_domain exists"
else
echo "Starting $node"
echo "Creating $image_dir/$node.$dns_domain.qcow2 at $os_drive_size"
qemu-img create -f qcow2 $image_dir/$node.$dns_domain.qcow2 $os_drive_size
echo "Resizing base OS image"
virt-resize --expand /dev/sda1 $base_os_img $image_dir/$node.$dns_domain.qcow2
echo "Customising OS for $node"
virt-customize -a $image_dir/$node.$dns_domain.qcow2 \
--root-password password:$root_password \
--uninstall cloud-init \
--hostname $node.$dns_domain \
--ssh-inject root:file:$ssh_pub_key \
--selinux-relabel
fi
done
check=$(virsh list --all | grep bastion.$dns_domain > /dev/null && echo "0" || echo "1" )
if [[ $check == "1" ]]; then
echo "Defining Bastion VM"
virt-install --name bastion.$dns_domain \
--virt-type kvm \
--memory 2048 \
--vcpus 2 \
--boot hd,menu=on \
--disk path=$image_dir/bastion.$dns_domain.qcow2,device=disk \
--os-type Linux \
--os-variant centos7 \
--network network:ceph-presentation \
--graphics spice \
--noautoconsole
fi
check=$(virsh list --all | grep grafana.$dns_domain > /dev/null && echo "0" || echo "1" )
if [[ $check == "1" ]]; then
echo "Defining Grafana VM"
virt-install --name grafana.$dns_domain \
--virt-type kvm \
--memory 4096 \
--vcpus 2 \
--boot hd,menu=on \
--disk path=$image_dir/grafana.$dns_domain.qcow2,device=disk \
--os-type Linux \
--os-variant centos7 \
--network network:ceph-presentation \
--graphics spice \
--noautoconsole
fi
echo "Building Monitor VMs"
count=1
for mon in `seq -w 01 03`; do
check=$(virsh list --all | grep ceph-mon$mon.$dns_domain > /dev/null && echo "0" || echo "1" )
if [[ $check == "0" ]]; then
echo "ceph-mon$mon.$dns_domain already exists"
count=$(( $count + 1 ))
else
echo "Creating eth0 ifcfg file"
mkdir -p $tmp_dir/ceph-mon$mon
cat <<EOF > $tmp_dir/ceph-mon$mon/ifcfg-eth0
TYPE=Ethernet
NAME=eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=10.44.20.2$count
NETMASK=255.255.255.0
GATEWAY=10.44.20.1
DNS1=10.44.20.1
ONBOOT=yes
DEFROUTE=yes
EOF
echo "Creating eth1 ifcfg file"
cat <<EOF > $tmp_dir/ceph-mon$mon/ifcfg-eth1
TYPE=Ethernet
NAME=eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=172.16.20.2$count
NETMASK=255.255.255.0
EOF
echo "Starting ceph-mon$mon"
echo "Creating $image_dir/ceph-mon$mon.$dns_domain.qcow2 at $os_drive_size"
qemu-img create -f qcow2 $image_dir/ceph-mon$mon.$dns_domain.qcow2 $os_drive_size
echo "Resizing base OS image"
virt-resize --expand /dev/sda1 $base_os_img $image_dir/ceph-mon$mon.$dns_domain.qcow2
echo "Customising OS for ceph-mon$mon"
virt-customize -a $image_dir/ceph-mon$mon.$dns_domain.qcow2 \
--root-password password:$root_password \
--uninstall cloud-init \
--hostname ceph-mon$mon \
--ssh-inject root:file:$ssh_pub_key \
--copy-in $tmp_dir/ceph-mon$mon/ifcfg-eth0:/etc/sysconfig/network-scripts/ \
--copy-in $tmp_dir/ceph-mon$mon/ifcfg-eth1:/etc/sysconfig/network-scripts/ \
--selinux-relabel
echo "Defining ceph-mon$mon.$dns_domain"
virt-install --name ceph-mon$mon.$dns_domain \
--virt-type kvm \
--memory 4096 \
--vcpus 4 \
--boot hd,menu=on \
--disk path=$image_dir/ceph-mon$mon.$dns_domain.qcow2,device=disk \
--os-type Linux \
--os-variant centos7 \
--network network:ceph-presentation \
--network network:ceph-replication \
--graphics spice \
--noautoconsole
count=$(( $count + 1 ))
fi
done
echo "Building OSD T1 drives"
count=1
for i in `seq -w 01 04`; do
check=$(virsh list --all | grep ceph-t1-osd$i.$dns_domain > /dev/null && echo "0" || echo "1" )
if [[ $check == "0" ]]; then
echo "ceph-t1-osd$i.$dns_domain already exists"
count=$(( $count + 1 ))
else
echo "Creating eth0 ifcfg file"
mkdir -p $tmp_dir/ceph-t1-osd$i
cat <<EOF > $tmp_dir/ceph-t1-osd$i/ifcfg-eth0
TYPE=Ethernet
NAME=eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=10.44.20.3$count
NETMASK=255.255.255.0
GATEWAY=10.44.20.1
DNS1=10.44.20.1
ONBOOT=yes
DEFROUTE=yes
EOF
echo "Creating eth1 ifcfg file"
cat <<EOF > $tmp_dir/ceph-t1-osd$i/ifcfg-eth1
TYPE=Ethernet
NAME=eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=172.16.20.3$count
NETMASK=255.255.255.0
EOF
echo "Starting ceph-t1-osd$i"
echo "Creating $image_dir/ceph-t1-osd$i.$dns_domain.qcow2 at $os_drive_size"
qemu-img create -f qcow2 $image_dir/ceph-t1-osd$i.$dns_domain.qcow2 $os_drive_size
for c in {1..4}; do
qemu-img create -f qcow2 $image_dir/ceph-t1-osd$i-disk$c.$dns_domain.qcow2 5G
done
echo "Resizing base OS image"
virt-resize --expand /dev/sda1 $base_os_img $image_dir/ceph-t1-osd$i.$dns_domain.qcow2
echo "Customising OS for ceph-t1-osd$i"
virt-customize -a $image_dir/ceph-t1-osd$i.$dns_domain.qcow2 \
--root-password password:$root_password \
--uninstall cloud-init \
--hostname ceph-t1-osd$i \
--ssh-inject root:file:$ssh_pub_key \
--copy-in $tmp_dir/ceph-t1-osd$i/ifcfg-eth0:/etc/sysconfig/network-scripts/ \
--copy-in $tmp_dir/ceph-t1-osd$i/ifcfg-eth1:/etc/sysconfig/network-scripts/ \
--selinux-relabel
echo "Defining ceph-t1-osd$i"
virt-install --name ceph-t1-osd$i.$dns_domain \
--virt-type kvm \
--memory 8192 \
--vcpus 4 \
--boot hd,menu=on \
--disk path=$image_dir/ceph-t1-osd$i.$dns_domain.qcow2,device=disk \
--disk path=$image_dir/ceph-t1-osd$i-disk1.$dns_domain.qcow2,device=disk \
--disk path=$image_dir/ceph-t1-osd$i-disk2.$dns_domain.qcow2,device=disk \
--disk path=$image_dir/ceph-t1-osd$i-disk3.$dns_domain.qcow2,device=disk \
--disk path=$image_dir/ceph-t1-osd$i-disk4.$dns_domain.qcow2,device=disk \
--os-type Linux \
--os-variant centos7 \
--network network:ceph-presentation \
--network network:ceph-replication \
--graphics spice \
--noautoconsole
count=$(( $count + 1 ))
fi
done
echo "Building OSD T2 drives"
count=1
for i in `seq -w 01 04`; do
check=$(virsh list --all | grep ceph-t2-osd$i.$dns_domain > /dev/null && echo "0" || echo "1" )
if [[ $check == "0" ]]; then
echo "ceph-t2-osd$i.$dns_domain already exists"
count=$(( $count + 1 ))
else
echo "Creating eth0 ifcfg file"
mkdir -p $tmp_dir/ceph-t2-osd$i
cat <<EOF > $tmp_dir/ceph-t2-osd$i/ifcfg-eth0
TYPE=Ethernet
NAME=eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=10.44.20.4$count
NETMASK=255.255.255.0
GATEWAY=10.44.20.1
DNS1=10.44.20.1
ONBOOT=yes
DEFROUTE=yes
EOF
echo "Creating eth1 ifcfg file"
cat <<EOF > $tmp_dir/ceph-t2-osd$i/ifcfg-eth1
TYPE=Ethernet
NAME=eth1
DEVICE=eth1
BOOTPROTO=static
IPADDR=172.16.20.4$count
NETMASK=255.255.255.0
EOF
echo "Starting ceph-t2-osd$i"
echo "Creating $image_dir/ceph-t2-osd$i.$dns_domain.qcow2 at $os_drive_size"
qemu-img create -f qcow2 $image_dir/ceph-t2-osd$i.$dns_domain.qcow2 $os_drive_size
for c in {1..4}; do
qemu-img create -f qcow2 $image_dir/ceph-t2-osd$i-disk$c.$dns_domain.qcow2 10G
done
echo "Resizing base OS image"
virt-resize --expand /dev/sda1 $base_os_img $image_dir/ceph-t2-osd$i.$dns_domain.qcow2
echo "Customising OS for ceph-t2-osd$i"
virt-customize -a $image_dir/ceph-t2-osd$i.$dns_domain.qcow2 \
--root-password password:$root_password \
--uninstall cloud-init \
--hostname ceph-t2-osd$i \
--ssh-inject root:file:$ssh_pub_key \
--copy-in $tmp_dir/ceph-t2-osd$i/ifcfg-eth0:/etc/sysconfig/network-scripts/ \
--copy-in $tmp_dir/ceph-t2-osd$i/ifcfg-eth1:/etc/sysconfig/network-scripts/ \
--selinux-relabel
echo "Defining ceph-t2-osd$i"
virt-install --name ceph-t2-osd$i.$dns_domain \
--virt-type kvm \
--memory 8192 \
--vcpus 4 \
--boot hd,menu=on \
--disk path=$image_dir/ceph-t2-osd$i.$dns_domain.qcow2,device=disk \
--disk path=$image_dir/ceph-t2-osd$i-disk1.$dns_domain.qcow2,device=disk \
--disk path=$image_dir/ceph-t2-osd$i-disk2.$dns_domain.qcow2,device=disk \
--disk path=$image_dir/ceph-t2-osd$i-disk3.$dns_domain.qcow2,device=disk \
--disk path=$image_dir/ceph-t2-osd$i-disk4.$dns_domain.qcow2,device=disk \
--os-type Linux \
--os-variant centos7 \
--network network:ceph-presentation \
--network network:ceph-replication \
--graphics spice \
--noautoconsole
count=$(( $count + 1 ))
fi
done
## Build extra ceph nodes if defined
# If rgw is "yes"
if [[ $rgw == "yes" ]]; then
count=1
for rgw in `seq -w 01 02`; do
check=$(virsh list --all | grep ceph-rgw$rgw.$dns_domain > /dev/null && echo "0" || echo "1" )
if [[ $check == "0" ]]; then
echo "ceph-rgw$rgw.$dns_domain already exists"
count=$(( $count + 1 ))
else
echo "Creating eth0 ifcfg file"
mkdir -p $tmp_dir/ceph-rgw$rgw
cat <<EOF > $tmp_dir/ceph-rgw$rgw/ifcfg-eth0
TYPE=Ethernet
NAME=eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=10.44.20.11$count
NETMASK=255.255.255.0
GATEWAY=10.44.20.1
DNS1=10.44.20.1
ONBOOT=yes
DEFROUTE=yes
EOF
echo "Starting ceph-rgw$rgw"
echo "Creating $image_dir/ceph-rgw$rgw.$dns_domain.qcow2 at $os_drive_size"
qemu-img create -f qcow2 $image_dir/ceph-rgw$rgw.$dns_domain.qcow2 $os_drive_size
echo "Resizing base OS image"
virt-resize --expand /dev/sda1 $base_os_img $image_dir/ceph-rgw$rgw.$dns_domain.qcow2
echo "Customising OS for ceph-rgw$rgw"
virt-customize -a $image_dir/ceph-rgw$rgw.$dns_domain.qcow2 \
--root-password password:$root_password \
--uninstall cloud-init \
--hostname ceph-rgw$rgw \
--ssh-inject root:file:$ssh_pub_key \
--copy-in $tmp_dir/ceph-rgw$rgw/ifcfg-eth0:/etc/sysconfig/network-scripts/ \
--selinux-relabel
echo "Defining ceph-rgw$rgw.$dns_domain"
virt-install --name ceph-rgw$rgw.$dns_domain \
--virt-type kvm \
--memory 4096 \
--vcpus 2 \
--boot hd,menu=on \
--disk path=$image_dir/ceph-rgw$rgw.$dns_domain.qcow2,device=disk \
--os-type Linux \
--os-variant centos7 \
--network network:ceph-presentation \
--graphics spice \
--noautoconsole
count=$(( $count + 1 ))
fi
done
fi
# If mds set to "yes"
if [[ $mds == "yes" ]]; then
count=1
for mds in `seq -w 01 02`; do
check=$(virsh list --all | grep ceph-mds$mds.$dns_domain > /dev/null && echo "0" || echo "1" )
if [[ $check == "0" ]]; then
echo "ceph-mds$mds.$dns_domain already exists"
count=$(( $count + 1 ))
else
echo "Creating eth0 ifcfg file"
mkdir -p $tmp_dir/ceph-mds$mds
cat <<EOF > $tmp_dir/ceph-mds$mds/ifcfg-eth0
TYPE=Ethernet
NAME=eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=10.44.20.12$count
NETMASK=255.255.255.0
GATEWAY=10.44.20.1
DNS1=10.44.20.1
ONBOOT=yes
DEFROUTE=yes
EOF
echo "Starting ceph-mds$mds"
echo "Creating $image_dir/ceph-mds$mds.$dns_domain.qcow2 at $os_drive_size"
qemu-img create -f qcow2 $image_dir/ceph-mds$mds.$dns_domain.qcow2 $os_drive_size
echo "Resizing base OS image"
virt-resize --expand /dev/sda1 $base_os_img $image_dir/ceph-mds$mds.$dns_domain.qcow2
echo "Customising OS for ceph-mds$mds"
virt-customize -a $image_dir/ceph-mds$mds.$dns_domain.qcow2 \
--root-password password:$root_password \
--uninstall cloud-init \
--hostname ceph-mds$mds \
--ssh-inject root:file:$ssh_pub_key \
--copy-in $tmp_dir/ceph-mds$mds/ifcfg-eth0:/etc/sysconfig/network-scripts/ \
--selinux-relabel
echo "Defining ceph-mds$mds.$dns_domain"
virt-install --name ceph-mds$mds.$dns_domain \
--virt-type kvm \
--memory 8192 \
--vcpus 4 \
--boot hd,menu=on \
--disk path=$image_dir/ceph-mds$mds.$dns_domain.qcow2,device=disk \
--os-type Linux \
--os-variant centos7 \
--network network:ceph-presentation \
--graphics spice \
--noautoconsole
count=$(( $count + 1 ))
fi
done
fi
# If iscsi set to "yes"
if [[ $iscsi == "yes" ]]; then
count=1
for iscsi in `seq -w 01 02`; do
check=$(virsh list --all | grep ceph-iscsi$iscsi.$dns_domain > /dev/null && echo "0" || echo "1" )
if [[ $check == "0" ]]; then
echo "ceph-iscsi$iscsi.$dns_domain already exists"
count=$(( $count + 1 ))
else
echo "Creating eth0 ifcfg file"
mkdir -p $tmp_dir/ceph-iscsi$iscsi
cat <<EOF > $tmp_dir/ceph-iscsi$iscsi/ifcfg-eth0
TYPE=Ethernet
NAME=eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=10.44.20.13$count
NETMASK=255.255.255.0
GATEWAY=10.44.20.1
DNS1=10.44.20.1
ONBOOT=yes
DEFROUTE=yes
EOF
echo "Starting ceph-iscsi$iscsi"
echo "Creating $image_dir/ceph-iscsi$iscsi.$dns_domain.qcow2 at $os_drive_size"
qemu-img create -f qcow2 $image_dir/ceph-iscsi$iscsi.$dns_domain.qcow2 $os_drive_size
echo "Resizing base OS image"
virt-resize --expand /dev/sda1 $base_os_img $image_dir/ceph-iscsi$iscsi.$dns_domain.qcow2
echo "Customising OS for ceph-iscsi$iscsi"
virt-customize -a $image_dir/ceph-iscsi$iscsi.$dns_domain.qcow2 \
--root-password password:$root_password \
--uninstall cloud-init \
--hostname ceph-iscsi$iscsi \
--ssh-inject root:file:$ssh_pub_key \
--copy-in $tmp_dir/ceph-iscsi$iscsi/ifcfg-eth0:/etc/sysconfig/network-scripts/ \
--selinux-relabel
echo "Defining ceph-iscsi$iscsi.$dns_domain"
virt-install --name ceph-iscsi$iscsi.$dns_domain \
--virt-type kvm \
--memory 8192 \
--vcpus 4 \
--boot hd,menu=on \
--disk path=$image_dir/ceph-iscsi$iscsi.$dns_domain.qcow2,device=disk \
--os-type Linux \
--os-variant centos7 \
--network network:ceph-presentation \
--graphics spice \
--noautoconsole
count=$(( $count + 1 ))
fi
done
fi
# Print running VMs
virsh list
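The repeated ifcfg heredocs in the script could be collapsed into a single helper. A minimal sketch; the function name and layout are illustrative, not part of the script above.

```shell
# Sketch: one helper in place of the repeated ifcfg heredocs.
# make_ifcfg <dir> <device> <ip> [gateway]
# Gateway/DNS/ONBOOT lines are emitted only when a gateway is given,
# matching the eth0 (routed) vs eth1 (replication-only) split above.
make_ifcfg() {
  local dir=$1 dev=$2 ip=$3 gw=$4
  mkdir -p "$dir"
  {
    printf 'TYPE=Ethernet\nNAME=%s\nDEVICE=%s\nBOOTPROTO=static\n' "$dev" "$dev"
    printf 'IPADDR=%s\nNETMASK=255.255.255.0\n' "$ip"
    if [ -n "$gw" ]; then
      # The gateway doubles as DNS1, as in the script's eth0 files.
      printf 'GATEWAY=%s\nDNS1=%s\nONBOOT=yes\nDEFROUTE=yes\n' "$gw" "$gw"
    fi
  } > "$dir/ifcfg-$dev"
}
make_ifcfg /tmp/ceph-mon01 eth0 10.44.20.21 10.44.20.1
make_ifcfg /tmp/ceph-mon01 eth1 172.16.20.21
```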
Adding More Disks⌗
If there's a need to add some more drives to the OSD nodes, this example will add them to every OSD VM. The range {f..g} adds 2 drives as /dev/vdf and /dev/vdg. Extend the range to add more.
# path=/var/lib/libvirt/images/
# size=50G
# server=$(virsh list | grep osd | awk '{print $2}')
for s in $server; do \
d=1; \
for l in {f..g}; do \
qemu-img create -f qcow2 $path/$s-extradisk$d.qcow2 $size; \
virsh attach-disk $s $path/$s-extradisk$d.qcow2 vd$l --subdriver qcow2 --persistent; \
d=$(( $d + 1 )); \
done; \
done
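To extend the brace range programmatically, the next device letters can be computed from the last one in use. A sketch: next_vds is a hypothetical helper, and it does not handle wrapping past vdz.

```shell
# Print the next N virtio device names after the last one in use.
next_vds() {
  local last=$1 n=$2 letters="abcdefghijklmnopqrstuvwxyz" rest i
  rest=${letters#*${last: -1}}   # letters after the current last one
  for (( i=0; i<n; i++ )); do
    echo "vd${rest:i:1}"
  done
}
next_vds vde 2   # vdf vdg - the same pair the {f..g} loop above targets
```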
Cleanup⌗
Cleanup bash script to remove all the parts of the Ceph lab
Scripts can also be found on GitHub
#!/bin/bash
# Node building vars
image_dir="/var/lib/libvirt/images"
base_os_img="/var/lib/libvirt/images/iso/CentOS-Stream-GenericCloud-8-20201217.0.x86_64.qcow2"
ssh_pub_key="/root/.ssh/id_ed25519.pub"
# Network Vars
dns_domain="ceph.lab"
# Extra Vars
root_password="password"
os_drive_size="40G"
tmp_dir="/tmp"
##### Start #####
# Destroy & Undefine all nodes
echo "Destroy & Undefine all nodes"
virsh destroy bastion.$dns_domain
virsh destroy grafana.$dns_domain
virsh undefine bastion.$dns_domain --remove-all-storage
virsh undefine grafana.$dns_domain --remove-all-storage
echo "Removing Monitor nodes"
for mon in `seq -w 01 03`; do
virsh destroy ceph-mon$mon.$dns_domain
virsh undefine ceph-mon$mon.$dns_domain --remove-all-storage
done
echo "Removing OSD nodes"
for i in `seq -w 01 04`; do
virsh destroy ceph-t1-osd$i.$dns_domain
virsh destroy ceph-t2-osd$i.$dns_domain
virsh undefine ceph-t1-osd$i.$dns_domain --remove-all-storage
virsh undefine ceph-t2-osd$i.$dns_domain --remove-all-storage
done
echo "Removing other nodes"
for i in `seq -w 01 02`; do
virsh destroy ceph-rgw$i.$dns_domain
virsh destroy ceph-mds$i.$dns_domain
virsh destroy ceph-iscsi$i.$dns_domain
virsh undefine ceph-rgw$i.$dns_domain --remove-all-storage
virsh undefine ceph-mds$i.$dns_domain --remove-all-storage
virsh undefine ceph-iscsi$i.$dns_domain --remove-all-storage
done
# Remove ifcfg files
echo "Removing monitor ifcfg files"
for mon in `seq -w 01 03`; do
rm $tmp_dir/ceph-mon$mon -rf
rm $tmp_dir/ceph-mds$mon -rf
rm $tmp_dir/ceph-iscsi$mon -rf
rm $tmp_dir/ceph-rgw$mon -rf
done
echo "Removing OSD ifcfg files"
for t in 1 2; do
for i in `seq -w 01 04`; do
rm $tmp_dir/ceph-t$t-osd$i -rf
done
done
# Remove Network files
echo "Removing ceph-presentation xml file"
rm $tmp_dir/ceph-presentation.xml -rf
echo "Removing ceph-replication xml file"
rm $tmp_dir/ceph-replication.xml -rf
echo "Removing ceph networks in libvirt"
for network in ceph-presentation ceph-replication; do
virsh net-destroy $network
virsh net-undefine $network
done
Ceph Install⌗
Requirements⌗
This guide will use cephadm, which requires these steps before starting
- Podman or Docker
- Chrony or NTP
- LVM2
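A quick local preflight for these requirements can be sketched as below. It only checks that the binaries are on PATH, not that the services are configured; the function name is illustrative.

```shell
# Check that the prerequisite tools are present on this host.
check_prereqs() {
  local cmd rc=0
  for cmd in podman chronyc lvm python3; do
    if command -v "$cmd" >/dev/null 2>&1; then
      echo "$cmd: ok"
    else
      echo "$cmd: MISSING"
      rc=1
    fi
  done
  return $rc
}
check_prereqs || echo "install the missing packages before continuing"
```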
Example Ansible inventory file for confirming and setting up requirements.
---
ceph:
hosts:
bastion.ceph.lab:
ansible_host: 10.44.20.90
grafana.ceph.lab:
ansible_host: 10.44.20.16
ceph-mon01.ceph.lab:
ansible_host: 10.44.20.21
ceph-mon02.ceph.lab:
ansible_host: 10.44.20.22
ceph-mon03.ceph.lab:
ansible_host: 10.44.20.23
ceph-t1-osd01.ceph.lab:
ansible_host: 10.44.20.31
ceph-t1-osd02.ceph.lab:
ansible_host: 10.44.20.32
ceph-t1-osd03.ceph.lab:
ansible_host: 10.44.20.33
ceph-t1-osd04.ceph.lab:
ansible_host: 10.44.20.34
ceph-t2-osd01.ceph.lab:
ansible_host: 10.44.20.41
ceph-t2-osd02.ceph.lab:
ansible_host: 10.44.20.42
ceph-t2-osd03.ceph.lab:
ansible_host: 10.44.20.43
ceph-t2-osd04.ceph.lab:
ansible_host: 10.44.20.44
ceph-rgw01.ceph.lab:
ansible_host: 10.44.20.111
ceph-rgw02.ceph.lab:
ansible_host: 10.44.20.112
- Enable and start chronyd service
# ansible -i inventory.yaml all -m service -a "name=chronyd state=started enabled=true"
- Confirm chrony is working
# ansible -i inventory.yaml all -m shell -a "chronyc tracking"
- Install podman
# ansible -i inventory.yaml all -m package -a "name=podman state=installed"
- Ensure python3 is installed
# ansible -i inventory.yaml all -m package -a "name=python3 state=installed"
Requirements Playbook⌗
---
- name: Ceph Lab Requirements
hosts: all
gather_facts: false
tasks:
- name: Ensure packages are installed
package:
name: "{{ item }}"
state: installed
loop:
- podman
- chrony
- python3
- lvm2
- name: Start and enable chronyd
service:
name: chronyd
state: started
enabled: true
- name: Build hosts file
lineinfile:
dest: /etc/hosts
regexp: '.*{{ item }}$'
line: "{{ hostvars[item]['ansible_host'] }} {{ hostvars[item]['inventory_hostname_short'] }} {{item}}"
state: present
with_items: "{{ groups['all'] }}"
Cephadm⌗
The following steps are run from the ceph-mon01.ceph.lab node
This section is described in detail on the Ceph Docs Site
- Get the latest version of cephadm
# curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
- Make cephadm executable
# chmod +x cephadm
- Install cephadm
# ./cephadm add-repo --release octopus
# ./cephadm install
# which cephadm
/usr/sbin/cephadm
# cephadm install ceph-common
Bootstrap a New Cluster⌗
- Ensure /etc/ceph exists on the first node
# mkdir -p /etc/ceph
- Then bootstrap the first monitor node. Full details of what this does can be found here. This will skip the monitoring stack, as it will be deployed later to its own node.
# cephadm bootstrap --mon-ip 10.44.20.21 --skip-monitoring-stack
- Confirm that the ceph command works from the first monitor node
# ceph -v
ceph version 15.2.8 (bdf3eebcd22d7d0b3dd4d5501bee5bac354d5b55) octopus (stable)
# ceph status
cluster:
id: 5af2c430-5198-11eb-94c6-525400613ffc
health: HEALTH_WARN
OSD count 0 < osd_pool_default_size 3
services:
mon: 1 daemons, quorum ceph-mon01.ceph.lab (age 11m)
mgr: ceph-mon01.ceph.lab.lszgjg(active, since 9m)
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
- Set the cluster_network CIDR in ceph config
# ceph config set global cluster_network 172.16.20.0/24
- Reconfigure the daemons
# ceph orch daemon reconfig mon.ceph-mon01
Scheduled to reconfig mon.ceph-mon01 on host 'ceph-mon01'
- For this lab, set the number of monitors to 3 and set the monitor public network
# ceph config set mon public_network 10.44.20.0/24
# ceph orch apply mon 3
Scheduled mon update...
- Copy Ceph’s ssh public key to the new node’s root user’s authorized_keys file
# ssh-copy-id -f -i /etc/ceph/ceph.pub root@10.44.20.22
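To seed the key on every node in one go, a loop over the static IPs can print the commands first as a dry run. A sketch; the IPs are assumed from the quick-setup table, and removing the echo actually executes the copies.

```shell
# Dry run: print one ssh-copy-id per statically addressed node.
# Remove the "echo" to actually run the commands.
for ip in 10.44.20.2{1..3} 10.44.20.3{1..4} 10.44.20.4{1..4}; do
  echo ssh-copy-id -f -i /etc/ceph/ceph.pub "root@$ip"
done
```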
Or with Ansible
---
- name: Fetch bootstrapped ssh key
hosts: ceph-mon01.ceph.lab
gather_facts: false
tasks:
- name: Grab ceph.pub key and store it
fetch:
src: /etc/ceph/ceph.pub
dest: /tmp/ceph.pub
flat: yes
- name: Add key to all nodes
hosts: all
gather_facts: false
tasks:
- name: Ensure key is in root's authorized_keys file
authorized_key:
user: root
key: "{{ lookup('file', '/tmp/ceph.pub') }}"
state: present
Add Monitor Nodes to the Cluster⌗
- Add the 2 remaining monitors to the cluster
# ceph orch host add ceph-mon02
Added host 'ceph-mon02'
# ceph orch host add ceph-mon03
Added host 'ceph-mon03'
# ceph orch host label add ceph-mon01 mon
Added label mon to host ceph-mon01
# ceph orch host label add ceph-mon02 mon
Added label mon to host ceph-mon02
# ceph orch host label add ceph-mon03 mon
Added label mon to host ceph-mon03
# ceph orch host ls
HOST ADDR LABELS STATUS
ceph-mon01.ceph.lab ceph-mon01.ceph.lab mon
ceph-mon02.ceph.lab ceph-mon02.ceph.lab mon
ceph-mon03.ceph.lab ceph-mon03.ceph.lab mon
- Tell cephadm to deploy monitor daemons to nodes matching the mon label
# ceph orch apply mon label:mon
Scheduled mon update...
- Ceph status should now show 3 monitors in the cluster
# ceph -s
cluster:
id: 5af2c430-5198-11eb-94c6-525400613ffc
health: HEALTH_WARN
OSD count 0 < osd_pool_default_size 3
services:
mon: 3 daemons, quorum ceph-mon01.ceph.lab,ceph-mon03,ceph-mon02 (age 3m)
mgr: ceph-mon01.ceph.lab.lszgjg(active, since 7m), standbys: ceph-mon02.nmvtji
osd: 0 osds: 0 up, 0 in
data:
pools: 0 pools, 0 pgs
objects: 0 objects, 0 B
usage: 0 B used, 0 B / 0 B avail
pgs:
Add OSD Nodes to the Cluster⌗
- Add the nodes into the cluster
# for i in $(grep osd /etc/hosts | awk '{print $2}'); do ceph orch host add $i; done
Added host 'ceph-t1-osd01'
Added host 'ceph-t1-osd02'
Added host 'ceph-t1-osd03'
Added host 'ceph-t1-osd04'
Added host 'ceph-t2-osd01'
Added host 'ceph-t2-osd02'
Added host 'ceph-t2-osd03'
Added host 'ceph-t2-osd04'
# for i in $(grep osd /etc/hosts | awk '{print $2}'); do ceph orch host label add $i osd; done
Added label osd to host ceph-t1-osd01
Added label osd to host ceph-t1-osd02
Added label osd to host ceph-t1-osd03
Added label osd to host ceph-t1-osd04
Added label osd to host ceph-t2-osd01
Added label osd to host ceph-t2-osd02
Added label osd to host ceph-t2-osd03
Added label osd to host ceph-t2-osd04
# ceph orch host ls
HOST ADDR LABELS STATUS
ceph-mon01.ceph.lab ceph-mon01.ceph.lab mon
ceph-mon02.ceph.lab ceph-mon02.ceph.lab mon
ceph-mon03.ceph.lab ceph-mon03.ceph.lab mon
ceph-t1-osd01.ceph.lab ceph-t1-osd01.ceph.lab osd
ceph-t1-osd02.ceph.lab ceph-t1-osd02.ceph.lab osd
ceph-t1-osd03.ceph.lab ceph-t1-osd03.ceph.lab osd
ceph-t1-osd04.ceph.lab ceph-t1-osd04.ceph.lab osd
ceph-t2-osd01.ceph.lab ceph-t2-osd01.ceph.lab osd
ceph-t2-osd02.ceph.lab ceph-t2-osd02.ceph.lab osd
ceph-t2-osd03.ceph.lab ceph-t2-osd03.ceph.lab osd
ceph-t2-osd04.ceph.lab ceph-t2-osd04.ceph.lab osd
- Check ceph orch can see all the available devices (this may take a few minutes)
# ceph orch device ls
Hostname Path Type Serial Size Health Ident Fault Available
ceph-t1-osd01.ceph.lab /dev/vdb hdd 5368M Unknown N/A N/A Yes
ceph-t1-osd01.ceph.lab /dev/vdc hdd 5368M Unknown N/A N/A Yes
ceph-t1-osd01.ceph.lab /dev/vdd hdd 5368M Unknown N/A N/A Yes
ceph-t1-osd01.ceph.lab /dev/vde hdd 5368M Unknown N/A N/A Yes
ceph-t1-osd02.ceph.lab /dev/vdb hdd 5368M Unknown N/A N/A Yes
ceph-t1-osd02.ceph.lab /dev/vdc hdd 5368M Unknown N/A N/A Yes
ceph-t1-osd02.ceph.lab /dev/vdd hdd 5368M Unknown N/A N/A Yes
ceph-t1-osd02.ceph.lab /dev/vde hdd 5368M Unknown N/A N/A Yes
ceph-t1-osd03.ceph.lab /dev/vdb hdd 5368M Unknown N/A N/A Yes
ceph-t1-osd03.ceph.lab /dev/vdc hdd 5368M Unknown N/A N/A Yes
ceph-t1-osd03.ceph.lab /dev/vdd hdd 5368M Unknown N/A N/A Yes
ceph-t1-osd03.ceph.lab /dev/vde hdd 5368M Unknown N/A N/A Yes
ceph-t1-osd04.ceph.lab /dev/vdb hdd 5368M Unknown N/A N/A Yes
ceph-t1-osd04.ceph.lab /dev/vdc hdd 5368M Unknown N/A N/A Yes
ceph-t1-osd04.ceph.lab /dev/vdd hdd 5368M Unknown N/A N/A Yes
ceph-t1-osd04.ceph.lab /dev/vde hdd 5368M Unknown N/A N/A Yes
ceph-t2-osd01.ceph.lab /dev/vdb hdd 10.7G Unknown N/A N/A Yes
ceph-t2-osd01.ceph.lab /dev/vdc hdd 10.7G Unknown N/A N/A Yes
ceph-t2-osd01.ceph.lab /dev/vdd hdd 10.7G Unknown N/A N/A Yes
ceph-t2-osd01.ceph.lab /dev/vde hdd 10.7G Unknown N/A N/A Yes
ceph-t2-osd02.ceph.lab /dev/vdb hdd 10.7G Unknown N/A N/A Yes
ceph-t2-osd02.ceph.lab /dev/vdc hdd 10.7G Unknown N/A N/A Yes
ceph-t2-osd02.ceph.lab /dev/vdd hdd 10.7G Unknown N/A N/A Yes
ceph-t2-osd02.ceph.lab /dev/vde hdd 10.7G Unknown N/A N/A Yes
ceph-t2-osd03.ceph.lab /dev/vdb hdd 10.7G Unknown N/A N/A Yes
ceph-t2-osd03.ceph.lab /dev/vdc hdd 10.7G Unknown N/A N/A Yes
ceph-t2-osd03.ceph.lab /dev/vdd hdd 10.7G Unknown N/A N/A Yes
ceph-t2-osd03.ceph.lab /dev/vde hdd 10.7G Unknown N/A N/A Yes
ceph-t2-osd04.ceph.lab /dev/vdb hdd 10.7G Unknown N/A N/A Yes
ceph-t2-osd04.ceph.lab /dev/vdc hdd 10.7G Unknown N/A N/A Yes
ceph-t2-osd04.ceph.lab /dev/vdd hdd 10.7G Unknown N/A N/A Yes
ceph-t2-osd04.ceph.lab /dev/vde hdd 10.7G Unknown N/A N/A Yes
- Finally, tell Ceph to consume every available and unused storage device
# ceph orch apply osd --all-available-devices
Scheduled osd.all-available-devices update...
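`--all-available-devices` is the simplest option, but cephadm also accepts a drive-group service spec for finer control. The sketch below is a hedged alternative, not part of the lab: the service id, label, and size filter are illustrative values chosen to match this lab's 5GB/10GB disks.

```shell
# Hedged alternative to --all-available-devices: a drive-group service
# spec that only consumes devices on hosts labelled "osd" whose size
# falls inside a range. Values are illustrative for this lab's disks.
cat > osd-spec.yaml <<'EOF'
service_type: osd
service_id: lab_osds
placement:
  label: osd
data_devices:
  size: '4G:11G'
EOF

# Apply it on the bootstrap node with:
# ceph orch apply osd -i osd-spec.yaml
```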
- Progress should then be visible by watching the Ceph status. This process can take some time.
# ceph status
cluster:
id: 5af2c430-5198-11eb-94c6-525400613ffc
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-mon01.ceph.lab,ceph-mon03,ceph-mon02 (age 47s)
mgr: ceph-mon01.ceph.lab.lszgjg(active, since 39m)
osd: 32 osds: 32 up (since 9m), 32 in (since 9m)
data:
pools: 1 pools, 1 pgs
objects: 0 objects, 0 B
usage: 32 GiB used, 208 GiB / 240 GiB avail
pgs: 1 active+clean
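Rather than re-running `ceph status` by hand, the wait can be scripted. This is a hedged sketch: the expected OSD count (32) and the `ceph osd stat` output format it parses are assumptions based on this lab's output above.

```shell
# Hedged sketch: poll until every OSD reports up. The expected count
# and the "ceph osd stat" line format are lab assumptions.
EXPECTED_OSDS=32

# "ceph osd stat" prints a line like:
#   32 osds: 32 up (since 9m), 32 in (since 9m); epoch: e123
# Extract the "N up" figure from that line.
osds_up() {
    printf '%s\n' "$1" | sed -n 's/^[0-9]* osds: \([0-9]*\) up.*/\1/p'
}

# On the bootstrap node, loop until the cluster reports every OSD up:
# while [ "$(osds_up "$(ceph osd stat)")" != "$EXPECTED_OSDS" ]; do sleep 10; done
```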
Add Rados Gateway⌗
For other daemon types, the Ceph documentation details how to configure them.
- Add and label the Rados Gateway hosts
# ceph orch host add ceph-rgw01
Added host 'ceph-rgw01'
# ceph orch host add ceph-rgw02
Added host 'ceph-rgw02'
# ceph orch host label add ceph-rgw01 rgw
Added label rgw to host ceph-rgw01
# ceph orch host label add ceph-rgw02 rgw
Added label rgw to host ceph-rgw02
# ceph orch host ls
HOST ADDR LABELS STATUS
ceph-mon01.ceph.lab ceph-mon01.ceph.lab mon
ceph-mon02.ceph.lab ceph-mon02.ceph.lab mon
ceph-mon03.ceph.lab ceph-mon03.ceph.lab mon
ceph-rgw01.ceph.lab ceph-rgw01.ceph.lab rgw
ceph-rgw02.ceph.lab ceph-rgw02.ceph.lab rgw
ceph-t1-osd01.ceph.lab ceph-t1-osd01.ceph.lab osd
ceph-t1-osd02.ceph.lab ceph-t1-osd02.ceph.lab osd
ceph-t1-osd03.ceph.lab ceph-t1-osd03.ceph.lab osd
ceph-t1-osd04.ceph.lab ceph-t1-osd04.ceph.lab osd
ceph-t2-osd01.ceph.lab ceph-t2-osd01.ceph.lab osd
ceph-t2-osd02.ceph.lab ceph-t2-osd02.ceph.lab osd
ceph-t2-osd03.ceph.lab ceph-t2-osd03.ceph.lab osd
ceph-t2-osd04.ceph.lab ceph-t2-osd04.ceph.lab osd
- Configure the RGW and define the nodes the daemons need to run on. In this example, `test` is the realm name and `uk-west` is the zone name.
# ceph orch apply rgw test uk-west --placement="2 label:rgw"
Scheduled rgw.test.uk-west update...
- Once again, watching the Ceph status will show when this operation has completed.
# ceph status
cluster:
id: 5af2c430-5198-11eb-94c6-525400613ffc
health: HEALTH_OK
services:
mon: 3 daemons, quorum ceph-mon01.ceph.lab,ceph-mon03,ceph-mon02 (age 13m)
mgr: ceph-mon01.ceph.lab.lszgjg(active, since 90m)
osd: 32 osds: 32 up (since 60m), 32 in (since 60m)
rgw: 2 daemons active (test.uk-west.ceph-rgw01.ouliyb, test.uk-west.ceph-rgw02.zbzdaj)
task status:
data:
pools: 5 pools, 105 pgs
objects: 202 objects, 7.2 KiB
usage: 33 GiB used, 207 GiB / 240 GiB avail
pgs: 105 active+clean
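A quick sanity check is that the gateways answer S3 requests. The sketch below is hedged: the endpoint URL is a lab assumption, and it relies on the usual RGW behaviour of answering an unauthenticated GET on `/` with an empty `ListAllMyBucketsResult` owned by `anonymous`.

```shell
# Hedged sketch: probe an RGW endpoint. The host/port is a lab
# assumption; adjust RGW_URL to match the environment.
RGW_URL="${RGW_URL:-http://ceph-rgw01.ceph.lab:80}"

# An unauthenticated GET on the RGW root is expected to return an
# empty ListAllMyBucketsResult owned by "anonymous".
looks_like_rgw() {
    case "$1" in
        *ListAllMyBucketsResult*anonymous*) return 0 ;;
        *) return 1 ;;
    esac
}

# Uncomment on a host that can reach the gateway:
# body="$(curl -s "$RGW_URL")"
# looks_like_rgw "$body" && echo "RGW responding at $RGW_URL"
```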
Configure Monitoring⌗
Follow these steps if there is a desire to move the monitoring daemons to their own node, or if `--skip-monitoring-stack` was used to bootstrap the cluster.
- Add the grafana node into the Ceph cluster
# ceph orch host add grafana.ceph.lab
Added host 'grafana'
# ceph orch host label add grafana.ceph.lab grafana
Added label grafana to host grafana.ceph.lab
- Remove any pre-installed monitoring (not required if `--skip-monitoring-stack` was used)
# ceph orch rm grafana
Removed service grafana
# ceph orch rm prometheus --force
Removed service prometheus
# ceph orch rm node-exporter
Removed service node-exporter
# ceph orch rm alertmanager
Removed service alertmanager
# ceph mgr module disable prometheus
- Enable monitoring stack with placement selectors
# ceph mgr module enable prometheus
# ceph orch apply node-exporter '*'
Scheduled node-exporter update...
# ceph orch apply alertmanager 1 --placement="label:grafana"
Scheduled alertmanager update...
# ceph orch apply prometheus 1 --placement="label:grafana"
Scheduled prometheus update...
# ceph orch apply grafana 1 --placement="label:grafana"
Scheduled grafana update...
- Confirm monitoring services are running on the desired node
# podman ps --format "{{.Names}}"
ceph-5af2c430-5198-11eb-94c6-525400613ffc-grafana.grafana
ceph-5af2c430-5198-11eb-94c6-525400613ffc-prometheus.grafana
ceph-5af2c430-5198-11eb-94c6-525400613ffc-alertmanager.grafana
ceph-5af2c430-5198-11eb-94c6-525400613ffc-node-exporter.grafana
ceph-5af2c430-5198-11eb-94c6-525400613ffc-crash.grafana
- Get the Grafana details
# ceph dashboard get-grafana-api-url
https://grafana.ceph.lab:3000
# ceph dashboard get-grafana-api-username
admin
# ceph dashboard get-grafana-api-password
admin
Conclusion⌗
At this point there should be a running Ceph cluster at release Octopus with, optionally, 2 Rados Gateway nodes and their associated pools. This lab will be the starting point for further Ceph labs.
To reset the ceph dashboard admin password⌗
# ceph dashboard ac-user-set-password admin password
Add an S3 user⌗
# radosgw-admin user create --uid=demo --display-name="Demo User"
{
"user_id": "demo",
"display_name": "Demo User",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"subusers": [],
"keys": [
{
"user": "demo",
"access_key": "4IMSY2D3RPWW7VB7ECPB",
"secret_key": "7AuFEvU9HKa4WB6BjfuTlZEDv6t1oHKhQ01zmIDo"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"default_storage_class": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"check_on_raw": false,
"max_size": -1,
"max_size_kb": 0,
"max_objects": -1
},
"temp_url_keys": [],
"type": "rgw",
"mfa_ids": []
}
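The demo user's keys from the output above can be dropped into an awscli profile to exercise the gateway over S3. This is a hedged sketch: the profile name and endpoint URL are assumptions, and the keys shown are the ones generated for this lab.

```shell
# Hedged sketch: configure an awscli profile with the demo user's keys
# (taken from the radosgw-admin output above). The profile name and
# endpoint URL are assumptions for this lab.
mkdir -p "$HOME/.aws"
cat >> "$HOME/.aws/credentials" <<'EOF'
[ceph-demo]
aws_access_key_id = 4IMSY2D3RPWW7VB7ECPB
aws_secret_access_key = 7AuFEvU9HKa4WB6BjfuTlZEDv6t1oHKhQ01zmIDo
EOF

# Then, from a host that can reach a gateway:
# aws --profile ceph-demo --endpoint-url http://ceph-rgw01.ceph.lab s3 mb s3://demo-bucket
# aws --profile ceph-demo --endpoint-url http://ceph-rgw01.ceph.lab s3 ls
```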
Cleanup⌗
Cleanup for this lab is detailed in this section; it will remove all the VMs and their storage volumes.