Introduction

Openshift Container Platform 4.7.19 with no DNS or DHCP, in a disconnected environment. The biggest constraint in all of this is that no form of DNS can be put onto any dedicated infrastructure. To finalise the deployment, an NSD pod is deployed to the cluster to act as a forward resolver.

In no way is this a guide on how it should be done; just because you can doesn’t mean you should!

Machines

Host                    IP                                            Role
harbor.dontdo.this      10.44.66.10 + DHCP address in 10.44.67.0/24   Harbor registry
lb.dontdo.this          10.44.66.5                                    Load balancer / bastion
bootstrap.dontdo.this   10.44.66.20                                   Bootstrap
control01.dontdo.this   10.44.66.21                                   Controller
control02.dontdo.this   10.44.66.22                                   Controller
control03.dontdo.this   10.44.66.23                                   Controller

Constraints

  • No DNS / DHCP
    • Hosts files
    • Static addressing and host names
  • Disconnected - requires a Harbor registry - no internet here
  • No Load Balancer - requires HAProxy / Nginx
  • All running CoreOS (except Harbor instance)
  • OCP version 4.7.19

Synopsis

Infrastructure

The infrastructure is all CoreOS with the exception of the Harbor node.

The Openshift repository is mirrored into Harbor once it is set up, and the modified Ignition files are hosted on the Load Balancer node.

graph TD
  Infra(Build Infrastructure) -->|Setup Harbor| Harbor{Harbor}
  Mirror>OCP Mirror] --> Harbor
  Harbor --> LB("Load Balancer & HTTP Server")
  Harbor --> Bootstrap(Bootstrap Node)
  Harbor -->|Hosts Container Images| Control01(Control01)
  Harbor --> Control02(Control02)
  Harbor --> Control03(Control03)
  Ignition(Ignition Files) -->|"Added hosts entries"| LB

Openshift

As no DNS is allowed as standalone infrastructure, certain elements of the cluster (for example the web console) need CoreDNS to forward to an upstream resolver. An NSD pod provides this, serving the same entries as the hosts files.

graph LR
  Pod((Pod)) --> Core
  Pod1((Pod)) --> Core
  Pod2((Pod)) --> Core
  Core{CoreDNS} -->|Upstream Forwarder| NSD("NSD: *.apps.domain, harbor.domain")
  subgraph ClusterDNS
    Core
  end
  subgraph "NSD Namespace"
    NSD
  end

Create Infrastructure (Libvirt / KVM)

Create Isolated Network

  • Create the network definition

<network>
  <name>dontdo.this</name>
  <bridge name="virbr2587"/>
  <ip address="10.44.66.1" netmask="255.255.255.0">
  </ip>
</network>
  • Create the network

virsh net-create network.xml
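
A quick sanity check that the isolated network is defined and active; this assumes the network name from the XML above:

virsh net-list --all

virsh net-dumpxml dontdo.this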

Create NAT Network

  • Simply to get Harbor built and populated from the internet, a NAT network is needed for this section.

<network>
  <name>dontdothis-proxy</name>
  <bridge name="virbr7852"/>
  <forward mode="nat" />
  <ip address="10.44.67.1" netmask="255.255.255.0">
    <dhcp>
      <range start="10.44.67.2" end="10.44.67.254"/>
    </dhcp>
  </ip>
</network>
  • Create the proxy network

virsh net-create proxy.net

Create VM Disks

To save time, all the drives will be created up front.


for i in harbor lb bootstrap control01 control02 control03; do
qemu-img create -f qcow2 /var/lib/virt/nvme/$i.dontdo.this.qcow2 120G
done
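
If you want to confirm the disks came out as expected, qemu-img can report on any of them, for example:

qemu-img info /var/lib/virt/nvme/bootstrap.dontdo.this.qcow2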

Harbor

As Harbor is usually existing infrastructure, we’ll just stick this on a CentOS 7 machine.

Create Harbor VM

  • Grab the CentOS QEMU cloud image


# Replace as required

curl https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2 -o /var/lib/libvirt/images/CentOS-7-x86_64-GenericCloud.qcow2
  • Configure the OS drive

virt-resize --expand /dev/sda1 \
 /var/lib/libvirt/images/CentOS-7-x86_64-GenericCloud.qcow2 \
 /var/lib/virt/nvme/harbor.dontdo.this.qcow2

virt-customize -a /var/lib/virt/nvme/harbor.dontdo.this.qcow2 \
 --root-password password:d0ddl3 \
 --uninstall cloud-init \
 --hostname harbor.dontdo.this \
 --ssh-inject root:file:/root/.ssh/id_ed25519.pub \
 --selinux-relabel
  • Define and start the Harbor VM

virt-install --name harbor.dontdo.this \
 --virt-type kvm \
 --memory 4096 \
 --vcpus 2 \
 --boot hd,menu=on \
 --disk path=/var/lib/virt/nvme/harbor.dontdo.this.qcow2,device=disk \
 --os-type Linux \
 --os-variant centos7 \
 --network network:dontdo.this \
 --network network:dontdothis-proxy \
 --graphics spice \
 --noautoconsole
  • Check you can log into the Console

virsh list
  • Example output
    
      Id   Name                 State
      ---------------------------------
      4    harbor.dontdo.this   running
    
    virsh console 4
    Connected to domain 'harbor.dontdo.this'
    Escape character is ^]
    

Setup Harbor

Setup and configure Harbor

  • Bring up the eth1 external interface

echo -e 'DEVICE="eth1"\nBOOTPROTO="dhcp"\nONBOOT="yes"\nTYPE="Ethernet"\nIPV6INIT="no"' > /etc/sysconfig/network-scripts/ifcfg-eth1

ifup eth1
  • Install prerequisites

yum install -y yum-utils epel-release

yum-config-manager \
 --add-repo \
 https://download.docker.com/linux/centos/docker-ce.repo

yum install docker-ce docker-ce-cli containerd.io docker-compose -y
  • Grab the latest Harbor release from GitHub; for this it is v2.2.3

curl -L https://github.com/goharbor/harbor/releases/download/v2.2.3/harbor-online-installer-v2.2.3.tgz -o harbor.tgz
  • Unpack the latest Harbor release

tar zxvf harbor.tgz
  • Generate a CA and certificates for Harbor

openssl genrsa -out ca.key 4096

openssl req -x509 -new -nodes -sha512 -days 3650 \
 -subj "/C=CN/ST=GB/L=GB/O=example/OU=Personal/CN=harbor.dontdo.this" \
 -key ca.key \
 -out ca.crt

openssl genrsa -out harbor.dontdo.this.key 4096

openssl req -sha512 -new \
 -subj "/C=CN/ST=GB/L=GB/O=example/OU=Personal/CN=harbor.dontdo.this" \
 -key harbor.dontdo.this.key \
 -out harbor.dontdo.this.csr

cat > v3.ext <<-EOF
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[alt_names]
DNS.1=harbor.dontdo.this
DNS.2=harbor
DNS.3=10.44.66.10
EOF

openssl x509 -req -sha512 -days 3650 \
 -extfile v3.ext \
 -CA ca.crt -CAkey ca.key -CAcreateserial \
 -in harbor.dontdo.this.csr \
 -out harbor.dontdo.this.crt  
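
Before copying the cert around, it is worth confirming the SAN entries made it in; something like this should list harbor.dontdo.this:

openssl x509 -in harbor.dontdo.this.crt -noout -text | grep -A1 'Subject Alternative Name'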
  • Copy over the generated certs to a folder on the Harbor host

mkdir -p /data/cert
cp harbor.dontdo.this.crt /data/cert/
cp harbor.dontdo.this.key /data/cert/
  • Add the certs to docker

mkdir -p /etc/docker/certs.d/harbor.dontdo.this
cp harbor.dontdo.this.crt /etc/docker/certs.d/harbor.dontdo.this/
cp harbor.dontdo.this.key /etc/docker/certs.d/harbor.dontdo.this/
cp ca.crt /etc/docker/certs.d/harbor.dontdo.this/
  • Restart docker engine

systemctl restart docker
  • Configure harbor.yml for installation

cp harbor/harbor.yml.tmpl harbor/harbor.yml
  • Edit harbor.yml to set the host name, cert and key for SSL.

hostname: harbor.dontdo.this
https:
  port: 443
  certificate: /data/cert/harbor.dontdo.this.crt
  private_key: /data/cert/harbor.dontdo.this.key
  • Now setup Harbor

./harbor/prepare
  • Bring up the internal interface

echo -e 'DEVICE="eth0"\nBOOTPROTO=static\nONBOOT="yes"\nTYPE="Ethernet"\nPREFIX=24\nIPADDR=10.44.66.10\nIPV6INIT="no"\n' > /etc/sysconfig/network-scripts/ifcfg-eth0

ifup eth0
  • And start Harbor

cd harbor

docker-compose up -d
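
A rough health check once the containers are up; the systeminfo endpoint should return some JSON about the instance (-k is only needed because the Harbor CA is not yet trusted on the VM):

docker-compose ps

curl -k https://localhost/api/v2.0/systeminfo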

Load Balancer

As this is not part of the provided infrastructure, this will have to be CoreOS. For this reason, the lab box has hosts file entries for Harbor and tinyproxy running to allow web browser access. The required images will have to be put into Harbor for the load balancer.

Build HAProxy container

This is easiest to do on the KVM host, which has access to the restricted network.


cat <<EOF > Dockerfile
FROM haproxy:2.3
RUN mkdir -p /var/lib/haproxy
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg
EOF

cat <<EOF > haproxy.cfg
global
log 127.0.0.1 local2
chroot /var/lib/haproxy
pidfile /var/run/haproxy.pid
maxconn 4000
user haproxy
group haproxy
daemon
stats socket /var/lib/haproxy/stats

defaults
mode tcp
log global
option tcplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000

## Add stats

listen stats
bind *:9090
stats enable
stats uri /stats
stats auth admin:admin
stats refresh 30s

## Frontend API

frontend api_frontend
bind *:6443
mode tcp
use_backend api

## Frontend Machine Controller

frontend machine_controller_frontend
bind *:22623
mode tcp
use_backend machine_controller

## HTTP Traffic

frontend http_frontend
bind *:80
mode tcp
use_backend http

## HTTPS traffic

frontend https_frontend
bind *:443
mode tcp
use_backend https

#### Backends

backend api
balance leastconn
option tcp-check
server bootstrap 10.44.66.20:6443 check
server control01 10.44.66.21:6443 check
server control02 10.44.66.22:6443 check
server control03 10.44.66.23:6443 check

backend machine_controller
balance leastconn
option tcp-check
server bootstrap 10.44.66.20:22623 check
server control01 10.44.66.21:22623 check
server control02 10.44.66.22:22623 check
server control03 10.44.66.23:22623 check

backend http
balance leastconn
option tcp-check
server bootstrap 10.44.66.20:80 check
server control01 10.44.66.21:80 check
server control02 10.44.66.22:80 check
server control03 10.44.66.23:80 check

backend https
balance leastconn
option tcp-check
server bootstrap 10.44.66.20:443 check
server control01 10.44.66.21:443 check
server control02 10.44.66.22:443 check
server control03 10.44.66.23:443 check
EOF
  • Check the config works

podman build . -t haproxy-test

podman run -it --rm \
 --name haproxy-syntax-check\
 haproxy-test haproxy -c -f /usr/local/etc/haproxy/haproxy.cfg

If this returns no errors, then tag the image for the Harbor registry. In these examples, a project has been created called utility-containers.

  • Tag the image

podman build . -t harbor.dontdo.this/utility-containers/haproxy:latest
  • Push the image to Harbor. There may be SSL errors here if the CA created earlier has not been imported into the host.

podman push harbor.dontdo.this/utility-containers/haproxy
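
If the push fails with an x509 / unknown authority error, one way to make podman trust the Harbor CA on the build host is to drop the CA into the registry's certs.d directory (or the system trust store); a sketch, assuming ca.crt is still in the working directory:

mkdir -p /etc/containers/certs.d/harbor.dontdo.this
cp ca.crt /etc/containers/certs.d/harbor.dontdo.this/ca.crt

# or, system-wide
cp ca.crt /etc/pki/ca-trust/source/anchors/harbor-ca.crt
update-ca-trust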

Add HTTP container to Harbor


podman pull httpd

podman tag <httpd image id> harbor.dontdo.this/utility-containers/httpd

podman push harbor.dontdo.this/utility-containers/httpd:latest

Build LB VM

With the latest CoreOS ISO downloaded, we will now build the VM to host the LB and the ignition files required by OCP.

  • Create the ignition file (saved as lb.ign):

{  
 "ignition": {  
 "version": "3.2.0"  
 },  
 "passwd": {  
 "users": [  
 {  
 "name": "core",  
 "sshAuthorizedKeys": [
 "ssh-ed25519 <your public key here>"
]  
 }  
 ]  
 },  
 "storage": {  
 "files": [
 {
 "path": "/etc/hostname",
 "contents": {
 "source": "data:,lb.dontdo.this"
 },
 "mode": 420
 },
{
 "overwrite": true,
 "path": "/etc/hosts",
"user": {
 "name": "root"
},
"contents": {
"source": "data:text/plain;charset=utf8;base64,MTAuNDQuNjYuNSBsYi5kb250ZG8udGhpcwoxMC40NC42Ni4xMCBoYXJib3IuZG9udGRvLnRoaXMK"
},
"mode": 384
},
{
"path": "/etc/pki/ca-trust/source/anchors/harbor-ca.pem",
"contents": {
"source": "data:text/plain;charset=utf8;base64,<base64 of the harbor CA cert goes here>"
},
"mode": 420
}
]
},
"systemd": {
"units": [
{
"contents": "[Unit]\nDescription=HAProxy Service\nAfter=network-online.target\nWants=network-online.target\n\n[Service]\nTimeoutStartSec=0\nExecStartPre=-/bin/podman kill haproxy\nExecStartPre=-/bin/podman rm haproxy\nExecStartPre=/bin/podman pull harbor.dontdo.this/utility-containers/haproxy\nExecStart=/bin/podman run -p 80:80 -p 443:443 -p 6443:6443 -p 22623:22623 -p 9090:9090 --name haproxy --sysctl net.ipv4.ip_unprivileged_port_start=0 harbor.dontdo.this/utility-containers/haproxy\n\n[Install]\nWantedBy=multi-user.target\n",
"enabled": true,
"name": "haproxy.service"
},
{
"contents": "[Unit]\nDescription=HTTPD Service\nAfter=network-online.target\nWants=network-online.target\n\n[Service]\nTimeoutStartSec=0\nExecStartPre=-/bin/podman kill httpd\nExecStartPre=-/bin/podman rm httpd\nExecStartPre=/bin/podman pull harbor.dontdo.this/utility-containers/httpd\nExecStart=/bin/podman run -p 8080:80 --name httpd harbor.dontdo.this/utility-containers/httpd\n\n[Install]\nWantedBy=multi-user.target\n",
"enabled": true,
"name": "httpd.service"
}
]
}
}
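
The base64 payloads in the storage section can be generated on any Linux box; for example, for the two hosts entries and the Harbor CA created earlier (paths assumed from the steps above):

printf '10.44.66.5 lb.dontdo.this\n10.44.66.10 harbor.dontdo.this\n' | base64 -w0

base64 -w0 ca.crt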
  • Create ignition disk image

mkdir tmp

cp lb.ign tmp/

virt-make-fs --format=qcow2 --type=ext4 tmp /var/lib/virt/nvme/lb-ign.dontdo.this.qcow2
  • Define the VM

virt-install --name lb.dontdo.this \
 --virt-type kvm \
 --memory 4096 \
 --vcpus 2 \
 --cdrom /var/lib/libvirt/images/iso/rhcos-live.x86_64.iso \
 --boot hd,menu=on \
 --disk path=/var/lib/virt/nvme/lb.dontdo.this.qcow2,device=disk \
 --disk path=/var/lib/virt/nvme/lb-ign.dontdo.this.qcow2,device=disk \
 --os-type Linux \
 --os-variant fedora-coreos-stable \
 --network network:dontdo.this \
 --graphics spice \
 --noautoconsole
  • From the console, mount the ignition disk

mount /dev/vdb /mnt
  • Setup the static networking

nmcli con mod "Wired connection 1" \
 ipv4.address 10.44.66.5/24 \
 ipv4.gateway 10.44.66.1 \
 ipv4.method manual

nmcli con up "Wired connection 1"

Check that Harbor can be pinged once the interface is up.

  • Install CoreOS

coreos-installer install /dev/vda --copy-network --ignition-file /var/mnt/lb.ign

Once the install has completed, reboot the machine. You should now be able to SSH to 10.44.66.5 as the user core and, if you have a web proxy set up into this network, access the stats page at http://10.44.66.5:9090/stats.

Openshift 4.7.19

In this section, the disconnected environment should be ready for an attempt to install Openshift 4.7 with no DNS or DHCP.

No Worries

Mirror the images

Start on the KVM host, as it has access to both the Harbor registry and the internet needed to mirror the images.


curl https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-client-linux.tar.gz -o oc.tar.gz

tar -zxvf oc.tar.gz

mv oc /usr/local/bin/

mv kubectl /usr/local/bin/

oc version
Client Version: 4.7.19

The next steps follow along with the official Red Hat documentation for mirroring images for a disconnected installation.

  • Grab your pull secret from cloud.redhat.com and add it to pull-secret.txt

  • Base64 your Harbor credentials for the pull-secret.


echo -n '<user_name>:<password>' | base64 -w0
  • Make a human readable copy of the pull secret

cat ./pull-secret.txt | jq . > pull-secret-edit.json
  • Add the Harbor credentials to this file:

{
"auths": {
"harbor.dontdo.this": {
"auth": "<your base64 here>",
"email": "<your email here>"
},
"cloud.openshift.com": { ......
}
}
}
  • Export the release version

export OCP_RELEASE=4.7.19
  • Export your local mirror

export LOCAL_REGISTRY=harbor.dontdo.this
  • Export the local repository

export LOCAL_REPOSITORY='openshift/ocp4'
  • Export the name of the repository to mirror

export PRODUCT_REPO='openshift-release-dev'
  • Export the path to the edited copy of the pull secret

export LOCAL_SECRET_JSON=pull-secret-edit.json
  • Export the release mirror

export RELEASE_NAME="ocp-release"
  • Export the architecture

export ARCHITECTURE=x86_64
  • Run the mirror (remove --dry-run when you are happy it is all right)

oc adm release mirror -a ${LOCAL_SECRET_JSON}  \
     --from=quay.io/${PRODUCT_REPO}/${RELEASE_NAME}:${OCP_RELEASE}-${ARCHITECTURE} \
     --to=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY} \
     --to-release-image=${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE} --dry-run

All the tags should now be in Harbor ready for use.
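
A quick way to confirm the mirrored release is usable is to ask oc about it directly from Harbor, using the same edited pull secret (add --insecure if the Harbor CA is not trusted on this host):

oc adm release info -a ${LOCAL_SECRET_JSON} \
 ${LOCAL_REGISTRY}/${LOCAL_REPOSITORY}:${OCP_RELEASE}-${ARCHITECTURE}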

Harbor Mirror Repo

Openshift install-config.yaml

The install-config.yaml required


apiVersion: v1
baseDomain: this
compute:
- name: worker
  replicas: 0
controlPlane:
  name: master
  replicas: 3
metadata:
  name: dontdo
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  networkType: OpenShiftSDN
  serviceNetwork:
  - 172.30.0.0/16
platform:
  none: {}
fips: false
pullSecret: '{"auths":{"harbor.dontdo.this": {"auth": "<your base64 auth>","email": "123@123.com"}}}'
sshKey: 'ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILkHWxN2uZZK8in6yY6JnEKgHqkAJ8jysFU3Xuer8UTY'
additionalTrustBundle: |
    -----BEGIN CERTIFICATE-----
    MIIFpTCCA42gAwIBAgIJAMul6QmiRAgrMA0GCSqGSIb3DQEBDQUAMGkxCzAJBgNV
    BAYTAkNOMQswCQYDVQQIDAJHQjELMAkGA1UEBwwCR0IxEDAOBgNVBAoMB2V4YW1w
    bGUxETAPBgNVBAsMCFBlcnNvbmFsMRswGQYDVQQDDBJoYXJib3IuZG9udGRvLnRo
    aXMwHhcNMjEwNzIyMDkxNjAxWhcNMzEwNzIwMDkxNjAxWjBpMQswCQYDVQQGEwJD
    TjELMAkGA1UECAwCR0IxCzAJBgNVBAcMAkdCMRAwDgYDVQQKDAdleGFtcGxlMREw
    DwYDVQQLDAhQZXJzb25hbDEbMBkGA1UEAwwSaGFyYm9yLmRvbnRkby50aGlzMIIC
    IjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAoxGQ1xumIC/Qo6M9kOGNfL9d
    6UawMV77cMuE7wXTTfaWi9DCuUig+0/8v/BQGVp9Yh4ARX4p8pEuvHjbK1OtB9jt
    vsm5IhQrMXMkJtA4+2Hfq7v1TTmeEU5BVId6doTPlLT0MY8kxt94MK41pJVcG0bV
    Jgsfb38fcgRWc9U3gWjAXv35/TOObQIl/toj+QcWelTABBmUf4ZMVXNTI28LS7LV
    CRORax4MkhBN8HIpNCQ67oXsPZj/aVK4J06VtuYHG/yxNHzfaSu6BU4O4ryEG9Nz
    MIKMJylgwXpVGWiBcHeWiBbzT7GliAeTzWyAIRTRE4i+kyYrtRPpcBK0LmiAsSiD
    JPrSafoB3XEdjket5oxS6G0QdbBifpK2oJheoy9EzCfKAjdvd6ZTpwiFysXxyKXf
    V6mSsDjsFmWDwp/sx3L2SBhBwvQUT68CwYULguDDhCjOJzkVeyZ46REF0OBPXPde
    +6V8S62dt+WpUzAiQ4T1BazVFIpjfDaoJ6TNkCR4opy/k7cNFmQW3ZB4rXrwljJm
    1G6NMlSdtkTVGWurPPxcyLhCW/beGxhmaqpzdM3kcOUXPjBnMwUTFPsk4wVYKQDV
    LUUOTv6s/RLT5WtCil7LNuDEOoyoYNsWrmU8A52MfGpBojjfU9OetMZ3kAkqaBsj
    1H3WaeCpYELA8iUNtvcCAwEAAaNQME4wHQYDVR0OBBYEFM9YoLVG8/hldXrRckI1
    GlT3JHG7MB8GA1UdIwQYMBaAFM9YoLVG8/hldXrRckI1GlT3JHG7MAwGA1UdEwQF
    MAMBAf8wDQYJKoZIhvcNAQENBQADggIBABuW/uH8dW9jL2avf73H8c7HdTf4J44M
    sE7o+gi3OVJuwU8h5AsX5vxr2yyQb1N9JQbBnFUw2J8kpAWr16iXaEsQ+m9SAifq
    u7MjqHIRWASlkzHdv4TIClxKwnm/lNhSK91D5BNAXD3YtcjiPnDcQKP3dAj/DMkl
    TItAMKeCQ7D/QcbrsnGqxZLy4qiLLk+x7IHbN9DK9CCRQozCbjjbL237aotSazuw
    Rekujxfuw+hScemY39o978NXamRwSEMte1O3HOoorfUzf3ksB1srPOlTW/5vYH/T
    /HuUiC21tSiVqWCkLITYRXB/fST2df6STWbtqES2UTPnh2Md1gfPCqKctzAtkytQ
    8pNDfKdtLcXN9ucbrzN59wg7xZqSe8o5BMn9FF41hllwhbpvW/nqFbLVxbMu1S8t
    fSzULIaNkwCZChygYY3+Uw6jj8pfuNb7nSHNnkASbR+JK1QkP2jE6jnY8bzte332
    2JlcIGxAsdTsCccMLm9l/VvL0gm3hiW/0lft6wu9pVnW+r+UQtS0Z5IDG/gwzdNF
    osx0CcIv9pu1iBbqbevBByEwp84lg2wJyYiV747OvtzJlY3JaVqbuFBLe+y5NEg/
    o944zFV88wMM7WGATTMDSg5wu/ZHAbwcaaZwCT9xKpKDg+uleUf0zp44eQYDNSQ3
    KtvVl95AwmVU
    -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - harbor.dontdo.this/openshift/ocp4
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - harbor.dontdo.this/openshift/ocp4
  source: registry.svc.ci.openshift.org/ocp/release
- mirrors:
  - harbor.dontdo.this/openshift/ocp4
  source: quay.io/openshift-release-dev/ocp-v4.0-art-dev
  • Create an install directory and copy install-config.yaml to it

mkdir install

cp install-config.yaml install/
  • Download the installer

curl https://mirror.openshift.com/pub/openshift-v4/clients/ocp/stable/openshift-install-linux.tar.gz -o openshift-installer.tar.gz

tar -zxvf openshift-installer.tar.gz

mv openshift-install /usr/local/bin
  • Create the required manifests and ignition files

openshift-install create manifests --dir=install/

openshift-install create ignition-configs --dir=install/
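
The install directory should now contain something along these lines:

ls install/
# auth  bootstrap.ign  master.ign  metadata.json  worker.ign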

Modify Ignition files

As there is no DNS or DHCP in this environment :cowboy: we need to add a hosts file to the ignition configs.

This is the hosts file that will be used:


127.0.0.1 localhost
10.44.66.10 harbor.dontdo.this
10.44.66.5 api.dontdo.this api-int.dontdo.this oauth-openshift.apps.dontdo.this console-openshift-console.apps.dontdo.this downloads-openshift-console.apps.dontdo.this canary-openshift-ingress-canary.apps.dontdo.this alertmanager-main-openshift-monitoring.apps.dontdo.this grafana-openshift-monitoring.apps.dontdo.this prometheus-k8s-openshift-monitoring.apps.dontdo.this thanos-querier-openshift-monitoring.apps.dontdo.this
10.44.66.20 bootstrap.dontdo.this
10.44.66.21 control01.dontdo.this
10.44.66.22 control02.dontdo.this
10.44.66.23 control03.dontdo.this
  • Base64 the hosts file

cat hosts | base64 -w0
  • Example output

    
      MTI3LjAuMC4xIGxvY2FsaG9zdAoxMC40NC42Ni4xMCBoYXJib3IuZG9udGRvLnRoaXMKMTAuNDQuNjYuNSBhcGkuZG9udGRvLnRoaXMgYXBpLWludC5kb250ZG8udGhpcyBvYXV0aC1vcGVuc2hpZnQuYXBwcy5kb250ZG8udGhpcyBjb25zb2xlLW9wZW5zaGlmdC1jb25zb2xlLmFwcHMuZG9udGRvLnRoaXMgZG93bmxvYWRzLW9wZW5zaGlmdC1jb25zb2xlLmFwcHMuZG9udGRvLnRoaXMgY2FuYXJ5LW9wZW5zaGlmdC1pbmdyZXNzLWNhbmFyeS5hcHBzLmRvbnRkby50aGlzIGFsZXJ0bWFuYWdlci1tYWluLW9wZW5zaGlmdC1tb25pdG9yaW5nLmFwcHMuZG9udGRvLnRoaXMgZ3JhZmFuYS1vcGVuc2hpZnQtbW9uaXRvcmluZy5hcHBzLmRvbnRkby50aGlzIHByb21ldGhldXMtazhzLW9wZW5zaGlmdC1tb25pdG9yaW5nLmFwcHMuZG9udGRvLnRoaXMgdGhhbm9zLXF1ZXJpZXItb3BlbnNoaWZ0LW1vbml0b3JpbmcuYXBwcy5kb250ZG8udGhpcyAKMTAuNDQuNjYuMjAgYm9vdHN0cmFwLmRvbnRkby50aGlzCjEwLjQ0LjY2LjIxIGNvbnRyb2wwMS5kb250ZG8udGhpcwoxMC40NC42Ni4yMiBjb250cm9sMDIuZG9udGRvLnRoaXMKMTAuNDQuNjYuMjMgY29udHJvbDAzLmRvbnRkby50aGlzCg==
      

  • Create a copy of bootstrap.ign that’s a bit easier to read


cat bootstrap.ign | jq . > bootstrap-edit.ign
  • In the storage.files array add the base64’d hosts file for the bootstrap node

{
  "storage": {
    "files": [
      {
        "overwrite": true,
        "path": "/etc/hosts",
        "user": {
          "name": "root"
        },
        "contents": {
          "source": "data:text/plain;charset=utf-8;base64,<your base64 hosts file here>"
        },
        "mode": 384
      }
    ]
  }
}
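
Rather than hand-editing the JSON, jq can splice the entry in; a rough sketch, assuming the hosts file from above is sitting in ./hosts:

HOSTS_B64=$(base64 -w0 hosts)

jq --arg b64 "$HOSTS_B64" \
 '.storage.files += [{"overwrite": true, "path": "/etc/hosts", "user": {"name": "root"}, "contents": {"source": ("data:text/plain;charset=utf-8;base64," + $b64)}, "mode": 384}]' \
 bootstrap.ign > bootstrap-edit.ign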

  • Copy the edited ignition file to the HTTP pod on the lb

scp bootstrap-edit.ign core@10.44.66.5:~
  • Then from the LB

podman cp /var/home/core/bootstrap-edit.ign 9c8786c4a074:/usr/local/apache2/htdocs/bootstrap.ign
  • Check you can now curl the ignition files

curl localhost:8080/bootstrap.ign

Bootstrap

Now we need to build the bootstrap machine and configure the network and host name.


virt-install --name bootstrap.dontdo.this \
 --virt-type kvm \
 --memory 4096 \
 --vcpus 2 \
 --cdrom /var/lib/libvirt/images/iso/rhcos-live.x86_64.iso \
 --boot hd,menu=on \
 --disk path=/var/lib/virt/nvme/bootstrap.dontdo.this.qcow2,device=disk \
 --os-type Linux \
 --os-variant fedora-coreos-stable \
 --network network:dontdo.this \
 --graphics spice \
 --noautoconsole
  • From the console set the network

nmcli con mod "Wired connection 1" \
 ipv4.address 10.44.66.20/24 \
 ipv4.gateway 10.44.66.1 \
 ipv4.method manual

nmcli con up "Wired connection 1"
  • Check you can ping the lb or the Harbor nodes

  • Set the host name


nmcli general hostname bootstrap.dontdo.this
  • Install CoreOS (It may be worth confirming you can curl the ignition URL before running this)

coreos-installer install /dev/vda \
 --copy-network \
 --ignition-url http://10.44.66.5:8080/bootstrap.ign \
 --insecure-ignition

At this point you should be able to SSH to the bootstrap node and see the processes starting.


journalctl -b -f -u release-image.service -u bootkube.service

You should also see the HAProxy back end come up for the machine_controller

Controllers

Now that the bootstrap node is up, we can build the 3 controllers.

First we need to get the hosted ignition file from the bootstrap node and make a few changes to allow the controller nodes to be configured.

  • Check the master.ign that was created for the URL to the hosted ignition

cat install/master.ign | jq .
{
  "ignition": {
    "config": {
      "merge": [
        {
          "source": "https://api-int.dontdo.this:22623/config/master"
        }
      ]
    },
    "security": {
      ...
    },
    "version": "3.2.0"
  }
}
  • Grab the ignition config from the API endpoint (You may require host entries on the KVM machine for this)

curl -k https://api-int.dontdo.this:22623/config/master | jq . > master-config-edit.ign
  • Add the hosts file block to this ignition

"storage": {
"files": [
{
"overwrite": true,
"path": "/etc/hosts",
"user": {
"name": "root"
},
"contents": {
"source": "data:text/plain;charset=utf-8;base64,MTI3LjAuMC4xIGxvY2FsaG9zdAoxMC40NC42Ni4xMCBoYXJib3IuZG9udGRvLnRoaXMKMTAuNDQuNjYuNSBhcGkuZG9udGRvLnRoaXMgYXBpLWludC5kb250ZG8udGhpcyBvYXV0aC1vcGVuc2hpZnQuYXBwcy5kb250ZG8udGhpcyBjb25zb2xlLW9wZW5zaGlmdC1jb25zb2xlLmFwcHMuZG9udGRvLnRoaXMgZG93bmxvYWRzLW9wZW5zaGlmdC1jb25zb2xlLmFwcHMuZG9udGRvLnRoaXMgY2FuYXJ5LW9wZW5zaGlmdC1pbmdyZXNzLWNhbmFyeS5hcHBzLmRvbnRkby50aGlzIGFsZXJ0bWFuYWdlci1tYWluLW9wZW5zaGlmdC1tb25pdG9yaW5nLmFwcHMuZG9udGRvLnRoaXMgZ3JhZmFuYS1vcGVuc2hpZnQtbW9uaXRvcmluZy5hcHBzLmRvbnRkby50aGlzIHByb21ldGhldXMtazhzLW9wZW5zaGlmdC1tb25pdG9yaW5nLmFwcHMuZG9udGRvLnRoaXMgdGhhbm9zLXF1ZXJpZXItb3BlbnNoaWZ0LW1vbml0b3JpbmcuYXBwcy5kb250ZG8udGhpcyAKMTAuNDQuNjYuMjAgYm9vdHN0cmFwLmRvbnRkby50aGlzCjEwLjQ0LjY2LjIxIGNvbnRyb2wwMS5kb250ZG8udGhpcwoxMC40NC42Ni4yMiBjb250cm9sMDIuZG9udGRvLnRoaXMKMTAuNDQuNjYuMjMgY29udHJvbDAzLmRvbnRkby50aGlzCg=="
},
"mode": 384
}
]
}
  • Change the ignition.version string inside this file to 3.2.0. As fetched, it will report an older spec version, for example:

{
  "ignition": {
    "version": "2.2.0"
  }
}
  • Copy the edited master ignition to your HTTP server

scp install/master-config-edit.ign core@10.44.66.5:~
  • Then from the LB

podman cp /var/home/core/master-config-edit.ign 9c8786c4a074:/usr/local/apache2/htdocs/master.ign
  • Create the 3 controller VMs

for i in `seq -w 01 03`; do
  virt-install --name control$i.dontdo.this \
   --virt-type kvm \
   --memory 16384 \
   --vcpus 8 \
   --cdrom /var/lib/libvirt/images/iso/rhcos-live.x86_64.iso \
   --boot hd,menu=on \
   --disk path=/var/lib/virt/nvme/control$i.dontdo.this.qcow2,device=disk \
   --os-type Linux \
   --os-variant fedora-coreos-stable \
   --network network:dontdo.this \
   --graphics spice \
   --noautoconsole
done
  • From the console of each, configure the network, host name and install CoreOS.

Example from control01



nmcli con mod "Wired connection 1" ipv4.address 10.44.66.21/24 ipv4.gateway 10.44.66.1 ipv4.method manual
nmcli con up "Wired connection 1"
nmcli general hostname control01.dontdo.this
coreos-installer install /dev/vda --copy-network --ignition-url http://10.44.66.5:8080/master.ign --insecure-ignition
  • Reboot the nodes, then watch the services on the bootstrap node and the stats page of HAProxy

Wait for Cluster to Build

At this point the cluster bootstrap will continue. The best places to watch this are from the bootstrap node with:


journalctl -b -f -u release-image.service -u bootkube.service

And the HAProxy stats page, waiting for all the services to come up.
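
The installer itself can also be used to wait on the bootstrap and install milestones from the KVM host, assuming it has the same hosts entries and can reach the API through the load balancer:

openshift-install wait-for bootstrap-complete --dir=install/ --log-level=info

openshift-install wait-for install-complete --dir=install/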

Once they are up, you should be able to access your cluster. The KVM host should have access if you copy the install/auth/kubeconfig to ~/.kube/config


oc get nodes
  • Example output
    
      NAME                    STATUS   ROLES           AGE   VERSION
      control01.dontdo.this   Ready    master,worker   35m   v1.20.0+87cc9a4
      control02.dontdo.this   Ready    master,worker   35m   v1.20.0+87cc9a4
      control03.dontdo.this   Ready    master,worker   35m   v1.20.0+87cc9a4
      

DNS

Unfortunately, the lack of any DNS means the console pods will be crashing. To fix this, we need to add a resolver for *.apps.dontdo.this and dontdo.this.

Create User

First we’re going to add a user to the cluster to give us access to the container registry.

  • Create User

htpasswd -c -B -b htpasswd nsd Password
  • Create the htaccess secret

oc create secret generic htpass-secret --from-file=htpasswd=htpasswd -n openshift-config
  • Create HTPasswd CR

apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: htpasswd
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
  • Apply the identity provider.

oc apply -f htpass-cr.yaml
  • Grab the CA certs for the cluster

echo "" | openssl s_client -showcerts -servername api.dontdo.this -connect api.dontdo.this:6443 > cert.pem
echo "" | openssl s_client -showcerts -servername oauth-openshift.apps.dontdo.this -connect oauth-openshift.apps.dontdo.this:443 > oauth_cert.pem
  • Edit the cert.pem and oauth_cert.pem to just contain the certificate data

  • Copy *.pem to /etc/pki/ca-trust/source/anchors/

  • Update CA trust


update-ca-trust
  • Check login

oc login https://api.dontdo.this:6443 -u nsd -p Password
  • Create Project

oc new-project nsd

NSD Container

  • Dockerfile

FROM alpine:latest
RUN apk update && apk add nsd
COPY nsd.conf /etc/nsd/nsd.conf
COPY apps.dontdo.this.zone /etc/nsd/apps.dontdo.this.zone
COPY dontdo.this.zone /etc/nsd/dontdo.this.zone
EXPOSE 5353
ENTRYPOINT nsd -c /etc/nsd/nsd.conf -d
  • nsd.conf

server:
    port: 5353
    server-count: 1
    ip4-only: yes
    hide-version: yes
    identity: ""
    zonesdir: "/etc/nsd"

zone:
    name: apps.dontdo.this
    zonefile: apps.dontdo.this.zone

zone:
    name: dontdo.this
    zonefile: dontdo.this.zone
  • apps.dontdo.this.zone

$ORIGIN apps.dontdo.this.
$TTL 86400

@ IN SOA ns1.alpinelinux.org. webmaster.apps.dontdo.this. (
2011100501 ; serial
28800 ; refresh
7200 ; retry
86400 ; expire
86400 ; min TTL
)

*       IN      A       10.44.66.5
  • dontdo.this.zone

$ORIGIN dontdo.this.
$TTL 86400

@ IN SOA ns1.alpinelinux.org. webmaster.dontdo.this. (
2011100501 ; serial
28800 ; refresh
7200 ; retry
86400 ; expire
86400 ; min TTL
)
harbor IN A 10.44.66.10
  • Build the image

podman build .

Expose OCP Registry

  • Set managementState: to Managed

oc edit configs.imageregistry/cluster
  • Patch the registry to be Empty Dir

oc patch configs.imageregistry.operator.openshift.io cluster \
 --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
  • Expose the registry

oc patch configs.imageregistry.operator.openshift.io/cluster \
 --patch '{"spec":{"defaultRoute":true}}' --type=merge
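
The exposed route can be checked before pushing anything to it; the default route lands in the openshift-image-registry namespace and should show default-route-openshift-image-registry.apps.dontdo.this:

oc get route default-route -n openshift-image-registry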
  • Create NSD Project

oc new-project nsd
  • Create image stream

oc create is nsd
  • Tag the NSD container

podman build . -t \
 default-route-openshift-image-registry.apps.dontdo.this:443/nsd/nsd:latest
  • Login to the local registry

podman login -u `oc whoami` -p `oc whoami --show-token` \
 default-route-openshift-image-registry.apps.dontdo.this:443
  • Push the image:

podman push default-route-openshift-image-registry.apps.dontdo.this:443/nsd/nsd

Create NSD

  • Add default service account to anyuid SCC

oc edit scc anyuid
  • Add - system:serviceaccount:nsd:default to the array of users

  • Create NSD


oc new-app --name nsd --image-stream=nsd
  • Expose 5353/udp in the service

oc edit service nsd
  • Modify the service to expose UDP

ports:
- name: 5353-udp
  port: 5353
  protocol: UDP
  targetPort: 5353

Add CoreDNS Forwarder

To do this, you will need to switch back to an admin user (oc login -u system:admin).

  • Grab the service IP for the NSD instance

oc -n nsd get service
  • Add a forwarder to the DNS operator

oc edit dns.operator/default
  • Add the NSD pod as a forwarder

spec:
  servers:
  - name: nsd
    zones:
    - dontdo.this
    - apps.dontdo.this
    forwardPlugin:
      upstreams:
      - 172.30.241.197:5353
  • Confirm the additions have been added to the CoreDNS configMap.

oc get configmap/dns-default -n openshift-dns -o yaml
  • Example output
    
      apiVersion: v1
      data:
        Corefile: |
          # nsd
          dontdo.this:5353 apps.dontdo.this:5353 {
              forward . 172.30.241.197:5353
              errors
              bufsize 512
          }
          .:5353 {
              bufsize 512
              errors
              health {
                  lameduck 20s
              }
              ready
              kubernetes cluster.local in-addr.arpa ip6.arpa {
                  pods insecure
                  upstream
                  fallthrough in-addr.arpa ip6.arpa
              }
              prometheus 127.0.0.1:9153
              forward . /etc/resolv.conf {
                  policy sequential
              }
      

Deploy Simple App

Add Harbor as a trusted registry

  • Add Harbor CA configMap to openshift-config

oc -n openshift-config create \
 configmap harbor-ca --from-file=ca.crt
  • Edit the Image Config custom resource

oc edit image.config.openshift.io/cluster
  • Add Harbor to spec:

spec:
  additionalTrustedCA:
    name: harbor-ca
  allowedRegistriesForImport:
  - domainName: harbor.dontdo.this
    insecure: false
  • Get hello-openshift to the Harbor registry

podman pull openshift/hello-openshift

podman tag 7af3297a3fb4 harbor.dontdo.this/utility-containers/hello-openshift:latest

podman push harbor.dontdo.this/utility-containers/hello-openshift:latest
  • New project

oc new-project hello-openshift
  • Deploy hello-openshift from Harbor

oc new-app --name hello \
 --docker-image harbor.dontdo.this/utility-containers/hello-openshift:latest


oc get pods
NAME                    READY   STATUS    RESTARTS   AGE
hello-7b45b9464-b6dmm   1/1     Running   0          6s
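
To round the demo off, the app can be exposed through the ingress, which also exercises the NSD wildcard zone; a sketch, noting that the generated route hostname (something like hello-hello-openshift.apps.dontdo.this) will need to resolve to the load balancer from wherever you curl it, for example via a hosts entry:

oc expose service hello

curl hello-hello-openshift.apps.dontdo.this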

Conclusion

All in all, this is a really bad idea. But for a POC in the most restrictive environment I can think of, this should be enough to at least prove what the platform is capable of and why it’s worth spending the time to put the DNS infrastructure in place for it.

Just because you can, doesn’t mean you should

Sums up this lab rather nicely!