Ansible Tower with Multiple Workers
Introduction⌗
This lab sets out to build an HA Ansible Tower setup on top of KVM. It also covers scaling up the worker nodes to meet demand and replacing a node should one fail.
Infrastructure Setup⌗
- An example XML for the tower network. This can be copied into tower-net.xml:
<network>
<name>tower</name>
<bridge name="virbr6542"/>
<forward mode="nat"/>
<domain name="tower.lab"/>
<ip address="10.44.54.1" netmask="255.255.255.0">
<dhcp>
<range start="10.44.54.10" end="10.44.54.100"/>
</dhcp>
</ip>
</network>
- Define the Tower Network
virsh net-define tower-net.xml
- Start Network
virsh net-start tower
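- Optionally, mark the network to autostart on host boot and confirm it is active:
virsh net-autostart tower
virsh net-list --all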
- Create VM disks
for i in db 01 02 lb node01; do
qemu-img create -f qcow2 /var/lib/virt/nvme/tower-$i.qcow2 40G
done
- Resize the RHEL 8 image into the OS disks
for i in db 01 02 lb node01; do
virt-resize --expand /dev/sda1 \
/var/lib/libvirt/images/iso/rhel-8.3-x86_64-kvm.qcow2 \
/var/lib/virt/nvme/tower-$i.qcow2
done
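- Optionally, confirm the root filesystem was expanded into each disk (virt-filesystems ships with libguestfs); one disk shown as an example:
virt-filesystems --long -h -a /var/lib/virt/nvme/tower-db.qcow2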
- Customise VMs
for i in db 01 02 lb node01; do
virt-customize -a /var/lib/virt/nvme/tower-$i.qcow2 \
--root-password password:password \
--uninstall cloud-init \
--hostname tower-$i.tower.lab \
--ssh-inject root:file:/root/.ssh/id_ed25519.pub \
--selinux-relabel
done
- Define VMs
for i in db 01 02 lb node01; do
virt-install --name tower-$i.tower.lab \
--virt-type kvm \
--memory 4096 \
--vcpus 2 \
--boot hd,menu=on \
--disk path=/var/lib/virt/nvme/tower-$i.qcow2,device=disk \
--os-type Linux \
--os-variant centos7 \
--network network:tower \
--graphics spice \
--noautoconsole
done
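- Confirm all five VMs are running:
virsh list --all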
Tower Prerequisites⌗
- Create a basic tower-inv.ini to help speed up the initial setup. The IPs for the hosts can be found with:
virsh net-dhcp-leases tower
[tower_lab]
tower-db.tower.lab ansible_ssh_host=10.44.54.34
tower-01.tower.lab ansible_ssh_host=10.44.54.44
tower-02.tower.lab ansible_ssh_host=10.44.54.60
tower-lb.tower.lab ansible_ssh_host=10.44.54.50
tower-node01.tower.lab ansible_ssh_host=10.44.54.23
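- The SSH key was injected for root, so if Ansible is not already connecting as root, a minimal addition to the inventory is a group var (adjust for your local setup):
[tower_lab:vars]
ansible_user=root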
- Check Ansible can reach all the hosts
ansible -i tower-inv.ini all -m ping
- Subscribe all nodes to get updates
ansible -i tower-inv.ini all \
-m redhat_subscription \
-a "state=present username=<your-email> password=<your password> auto_attach=true"
PostgreSQL Install⌗
On the tower-db host:
- Install PostgreSQL packages
dnf install @postgresql -y
- Setup PostgreSQL initdb
postgresql-setup --initdb
- Set the password for the postgres user
passwd postgres
- Start PostgreSQL services
systemctl enable postgresql
systemctl start postgresql
PostgreSQL Setup⌗
- Switch to the postgres user
su - postgres
- Setup a tower user
createuser --interactive --pwprompt
Example:
Enter name of role to add: tower
Enter password for new role:
Enter it again:
Shall the new role be a superuser? (y/n) n
Shall the new role be allowed to create databases? (y/n) y
Shall the new role be allowed to create more new roles? (y/n) y
- Edit /var/lib/pgsql/data/postgresql.conf so PostgreSQL listens on all interfaces:
listen_addresses = '*'
- Edit /var/lib/pgsql/data/pg_hba.conf and add these two lines to the bottom of the file:
host all all 0.0.0.0/0 md5
host all all ::/0 md5
- Restart PostgreSQL
systemctl restart postgresql
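- If firewalld is enabled on tower-db (it may not be in this image), the Tower nodes also need to reach PostgreSQL on port 5432:
firewall-cmd --permanent --add-service=postgresql
firewall-cmd --reload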
Install Tower⌗
- Set up an inventory file, tower-setup.ini:
[tower]
tower-01.tower.lab ansible_ssh_host=10.44.54.44
tower-02.tower.lab ansible_ssh_host=10.44.54.60
[database]
tower-db.tower.lab ansible_ssh_host=10.44.54.34
[all:vars]
admin_password='password'
pg_password='password'
pg_host='tower-db.tower.lab'
pg_port='5432'
pg_database='awx'
pg_username='tower'
- Download the installation tar:
curl \
https://releases.ansible.com/ansible-tower/setup/ansible-tower-setup-latest.tar.gz \
-o tower-setup.tar.gz
- Unpack the installation file
tar -zxvf tower-setup.tar.gz
- Copy the inventory file into the ansible-tower-setup directory
- Modify roles/preflight/defaults/main.yml, lowering required_ram so the preflight check passes on these 4 GB VMs:
required_ram: 3500
- Run the tower installer with the new inventory
./setup.sh -i tower-setup.ini
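- Once the installer finishes, a quick sanity check is Tower's ping endpoint, which lists the instances attached to the database:
curl -k https://tower-01.tower.lab/api/v2/ping/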
Setup LB⌗
The LB will use HAProxy to balance requests between the two Ansible Tower hosts.
- Install HAProxy
dnf install haproxy -y
- Create haproxy.cfg:
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
stats socket /var/lib/haproxy/stats
defaults
mode tcp
log global
option tcplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000
### Add stats
listen stats
bind *:9090
mode http
stats enable
stats uri /stats
stats auth admin:admin
stats refresh 30s
### HTTPS traffic
frontend https_frontend
bind *:443
mode tcp
use_backend https
#### Backends
backend https
balance leastconn
option tcp-check
server tower-01 tower-01.tower.lab:443 check
server tower-02 tower-02.tower.lab:443 check
- Overwrite /etc/haproxy/haproxy.cfg with this file
- Start and enable HAProxy
systemctl enable haproxy
systemctl start haproxy
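- Optionally, validate the config syntax first; and if SELinux is enforcing and blocks HAProxy from connecting on these ports, the haproxy_connect_any boolean may be needed:
haproxy -c -f /etc/haproxy/haproxy.cfg
setsebool -P haproxy_connect_any 1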
Losing a Tower Node⌗
Destroy a Node⌗
- Destroy tower-01
virsh destroy tower-01.tower.lab
virsh undefine tower-01.tower.lab --remove-all-storage
- Confirm the node shows as unavailable in the Tower UI
- Remove the node from the Tower instances (run on a surviving node, e.g. tower-02)
awx-manage remove_from_queue --hostname=tower-01.tower.lab --queuename=tower
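- Confirm the removal with:
awx-manage list_instances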
Build New Tower Node⌗
- Create disk for tower-03
qemu-img create -f qcow2 /var/lib/virt/nvme/tower-03.qcow2 40G
- Expand RHEL into the disk
virt-resize --expand /dev/sda1 \
/var/lib/libvirt/images/iso/rhel-8.3-x86_64-kvm.qcow2 \
/var/lib/virt/nvme/tower-03.qcow2
- Customise image
virt-customize -a /var/lib/virt/nvme/tower-03.qcow2 \
--root-password password:password \
--uninstall cloud-init \
--hostname tower-03.tower.lab \
--ssh-inject root:file:/root/.ssh/id_ed25519.pub \
--selinux-relabel
- Build server
virt-install --name tower-03.tower.lab \
--virt-type kvm \
--memory 4096 \
--vcpus 2 \
--boot hd,menu=on \
--disk path=/var/lib/virt/nvme/tower-03.qcow2,device=disk \
--os-type Linux \
--os-variant centos7 \
--network network:tower \
--graphics spice \
--noautoconsole
- Grab the node IP from
virsh net-dhcp-leases tower
Expiry Time          MAC address        Protocol  IP address       Hostname
-----------------------------------------------------------------------------
2021-11-04 11:55:11  52:54:00:07:02:c2  ipv4      10.44.54.34/24   tower-db
2021-11-04 11:58:34  52:54:00:24:77:82  ipv4      10.44.54.50/24   tower
2021-11-04 12:00:18  52:54:00:b8:dc:7e  ipv4      10.44.54.94/24   tower-03
2021-11-04 11:30:30  52:54:00:bd:be:5e  ipv4      10.44.54.44/24   tower-01
2021-11-04 11:55:54  52:54:00:d8:d9:ec  ipv4      10.44.54.23/24   tower-node01
2021-11-04 11:56:08  52:54:00:e1:f8:6a  ipv4      10.44.54.60/24   tower-02
- Add the node to the inventory
[new_tower]
tower-03.tower.lab ansible_ssh_host=10.44.54.94
[tower_lab]
tower-db.tower.lab ansible_ssh_host=10.44.54.34
tower-01.tower.lab ansible_ssh_host=10.44.54.44
tower-02.tower.lab ansible_ssh_host=10.44.54.60
tower-lb.tower.lab ansible_ssh_host=10.44.54.50
tower-node01.tower.lab ansible_ssh_host=10.44.54.23
- Register the node with Red Hat.
ansible -i tower-inv.ini new_tower \
-m redhat_subscription \
-a "state=present username=<your-email> password=<your password> auto_attach=true"
Manually Defining Secret Key⌗
There are two ways of adding a node back into the cluster. With this first method, no Ansible is run against the remaining Tower node; in the event ALL worker nodes are lost, this is the way to rebuild them against the existing database. If at least one node remains, the secret key can instead be slurped from that node using Ansible (see the next section). Either way, the contents of /etc/tower/SECRET_KEY should be stored somewhere safe in case all Tower nodes are lost.
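A minimal sketch for taking that copy from a surviving node (the destination path is just an example):
scp root@tower-02.tower.lab:/etc/tower/SECRET_KEY /root/tower-SECRET_KEY.bak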
- Modify the inventory file for the setup script.
Here we remove the other tower nodes, leaving just the database host and details along with the new node. We also add the secret_key_override var to the inventory, set to the secret key from the other nodes, so the new node can participate in the cluster. The secret key can be found in /etc/tower/SECRET_KEY on the nodes.
[tower]
tower-03.tower.lab ansible_ssh_host=10.44.54.94
# tower-01.tower.lab ansible_ssh_host=10.44.54.44
# tower-02.tower.lab ansible_ssh_host=10.44.54.60
[database]
tower-db.tower.lab ansible_ssh_host=10.44.54.34
[all:vars]
secret_key_override='4Q2oDtd2Hazdbdel0k/S9lxIRMFpdK+vLsgI23ThV1nb'
admin_password='password'
pg_password='password'
pg_host='tower-db.tower.lab'
pg_port='5432'
pg_database='awx'
pg_username='tower'
- Run setup again with the new inventory
./setup.sh -i tower-setup.ini
Automatically Grab Secret Key⌗
- Modify the inventory file for the setup script.
Make sure that the new node is NOT first in the tower group, and any unavailable nodes are removed from the inventory. The SECRET_KEY will be slurped from the first node in this group.
[tower]
# tower-01.tower.lab ansible_ssh_host=10.44.54.44
tower-02.tower.lab ansible_ssh_host=10.44.54.60
tower-03.tower.lab ansible_ssh_host=10.44.54.94
[database]
tower-db.tower.lab ansible_ssh_host=10.44.54.34
[all:vars]
admin_password='password'
pg_password='password'
pg_host='tower-db.tower.lab'
pg_port='5432'
pg_database='awx'
pg_username='tower'
- Run setup again with the new inventory
./setup.sh -i tower-setup.ini
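- Confirm tower-03 has joined the cluster; the ping endpoint on a surviving node should now list it:
curl -k https://tower-02.tower.lab/api/v2/ping/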
Add to the Load Balancer⌗
- Add the new host into the load balancer
# vim /etc/haproxy/haproxy.cfg
---
#### Backends
backend https
balance leastconn
option tcp-check
# server tower-01 tower-01.tower.lab:443 check
server tower-02 tower-02.tower.lab:443 check
server tower-03 tower-03.tower.lab:443 check
- Reload HAProxy
systemctl reload haproxy
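- Confirm tower-03 shows as UP via the stats listener defined earlier:
curl -u admin:admin http://tower-lb.tower.lab:9090/stats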
Summary⌗
This lab should leave you with a two-node Ansible Tower cluster with a separate database node. A standalone machine is also set up as part of this lab so that playbooks can be run against it in a testing scenario. Backups are not covered in this lab, but should you wish to set up backups for this environment, this page details how it can be done.