Adventures in Code

Minimal Kubernetes applications

Minimal Kubernetes applications for the homelabber

Kubernetes provides a standardised API with the primitives needed to deploy and run containerised software. Having worked in the SilverStripe platform team, which was responsible for proprietary deployment and orchestration software, I have been following the progress of kubernetes for a long time. It appears to provide a ready-made solution for the operation and deployment of software.

first deployment

there are significant challenges in implementing a kubernetes deployment:

  1. Drivers

kubernetes provides out-of-the-box tools for managing processing workloads (called pods); however, when deploying on bare metal an administrator has to provide their own storage and networking software through the use of kubernetes api operators (or drivers).

  2. Learning curve

kubernetes is an abstraction layer on top of linux and container runtimes, so learning linux and container runtimes first is essential to even begin understanding how to run software in pods. Once that hurdle is passed, the would-be administrator is faced with an onslaught of terminology and software plugins which can intimidate and put you off starting.

my entry point to kubernetes was k3s

the quickstart guide is very simple

it comes with klipper (the bundled service load balancer) and the local-path storage provisioner, which provide the minimum workload (pod) requirements for networking and storage.
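
As a quick illustration, a claim against the bundled local-path storage class is enough to give a pod persistent storage on a single node; the name and size here are only placeholders:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: example-data            # placeholder claim name
    spec:
      accessModes:
        - ReadWriteOnce             # local-path volumes are node-local
      storageClassName: local-path  # provisioner bundled with k3s
      resources:
        requests:
          storage: 1Gi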

My current setup uses

application management

my experience with a small homelab cluster was that helm deployments advertised what I wanted but prevented me from learning how to write my own deployments. I would often deploy via helm, with just the defaults, and have the application spew logs telling me I had done it wrong.

it’s not a great administrator experience

kompose

often self-hosted applications come with docker-compose yaml configurations

kompose is able to translate those (somewhat poorly) into kubernetes yaml

With the heavy lifting done, from there I can shuffle the components into an application folder and deploy with kubectl apply -f example-application/
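
The result of a conversion is roughly a deployment manifest per compose service; something of this shape ends up in the application folder (the image and names below are illustrative, not literal kompose output):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example-app
      labels:
        app: example-app
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: example-app
      template:
        metadata:
          labels:
            app: example-app
        spec:
          containers:
            - name: example-app
              image: ghcr.io/example/example-app:latest   # hypothetical image
              ports:
                - containerPort: 8080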

Once I have a working application I can then add networking service definitions and link to the bastion reverse proxy host in catalyst cloud.
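
The service definition is the simple part; a LoadBalancer service along these lines (ports and selector assumed) picks up an address from metallb that the reverse proxy can forward to:

    apiVersion: v1
    kind: Service
    metadata:
      name: example-app
    spec:
      type: LoadBalancer        # address allocated from the metallb pool
      selector:
        app: example-app
      ports:
        - port: 80
          targetPort: 8080      # assumed container port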

adding a pull policy for updates means no worries about patching
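
The pull policy itself is a one-line addition to the container spec; note that with a rolling tag the new image is only pulled when the pod is recreated, so a rollout or restart still has to happen:

    # container spec fragment: re-pull the tag whenever the pod starts
    containers:
      - name: example-app
        image: ghcr.io/example/example-app:latest   # hypothetical rolling tag
        imagePullPolicy: Always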

Wireguard

Implementing wireguard manually

To allow more fine-grained control of my network I have decided to implement wireguard as a site-to-site VPN.

Prior to this I was using tailscale to advertise routes from catalyst cloud into my homelab LAN. The issue I experienced was traffic going all the way to Sydney, to Tailscale’s relay server. Part of the feature set for tailscale is NAT punchthrough, and that requires a public endpoint to provide coordination.

Most companies host in Sydney because it has cloud regions for the big three vendors (Google, Microsoft, Amazon), but that increases the charges for my hosting because international bandwidth is more expensive than national bandwidth.

To remove the relay server problem I decided to try building a wireguard tunnel with NetworkManager.

Factors that allowed me to implement wireguard

  • Both of my machines are running Fedora CoreOS with NetworkManager
  • the existing subnet router is responsible for the tunnel
  • I have a public static ip which I can use to allowlist on the catalyst cloud security group
  • port forwarding rules translate my public address to the internal address of the subnet router
  • the main node is the prodesk G2 hosting both the kubernetes api server and wireguard tunnel

I’ve added a specific transit network between catalyst and home, 10.9.9.0/24. That allows me to assign gateway addresses and static routes in order to forward the set of addresses allocated to metallb running inside the cluster over the new tunnel.
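
For context, the metallb side is just an address pool plus an advertisement (L2 mode in this sketch, which is an assumption); the range below is a placeholder, and it is this range the static routes send over the tunnel:

    apiVersion: metallb.io/v1beta1
    kind: IPAddressPool
    metadata:
      name: homelab-pool
      namespace: metallb-system
    spec:
      addresses:
        - 192.168.50.200-192.168.50.220   # placeholder range routed over the tunnel
    ---
    apiVersion: metallb.io/v1beta1
    kind: L2Advertisement
    metadata:
      name: homelab-l2
      namespace: metallb-system
    spec:
      ipAddressPools:
        - homelab-pool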

My hope is that it will resolve the routing issues, and so far on twodegrees I have been routed to my cloud server via Auckland with 38ms of latency, which for a game server is within my acceptable range.

Implementation all worked well thanks to the redhat guides on configuring NetworkManager. The only issue I had was not ticking the automatically connect tickbox when creating the connections, which led to a couple of automatic-patching-related outages.

In future I’m considering nmstate-operator for the kube cluster to allow the configuration to be committed to my homelab code repository.
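
The attraction is that node network config becomes a kubernetes object. A sketch of the transit route as a NodeNetworkConfigurationPolicy is below; the node name and cloud-side subnet are placeholders, and I’d still need to confirm whether the wireguard interface itself can be declared through nmstate:

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: transit-route
    spec:
      nodeSelector:
        kubernetes.io/hostname: prodesk-g2     # placeholder node name
      desiredState:
        routes:
          config:
            - destination: 10.8.8.0/24         # placeholder cloud-side subnet
              next-hop-address: 10.9.9.1       # peer on the 10.9.9.0/24 transit network
              next-hop-interface: wg0          # assumes the tunnel interface is named wg0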

Immutable Workers

status

I have four hosts that run my homelab stuff, and I want fewer things to patch, so I am switching from a traditional rpm-based OS to rpm-ostree.

problems

  • centos8-stream went EOL July 2023
  • over time config drifts between hosts with manual package selections
  • base filesystem config could be automated by saving system configuration as code

goals

  • new base os layer
  • understand update process
  • create unattended installer for server and workers
  • configure metallb from the start
  • configure safe kubelet shutdown (see the sketch after this list)
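
Safe shutdown here means kubelet graceful node shutdown, so pods are terminated cleanly when a node reboots for automatic updates. A kubelet configuration fragment along these lines would enable it, passed to the kubelet as a config file (in k3s, via a kubelet-arg); the values are assumptions rather than what I settled on:

    # kubelet graceful node shutdown (illustrative values)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    shutdownGracePeriod: 120s
    shutdownGracePeriodCriticalPods: 30s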

resulting design

Fedora CoreOS base layer (fcos39)

butane configuration files compiled to ignition and embedded into installer iso files for automatic provisioning

systemd units and scripts for installing k3s and tailscale, delivered via file directives in butane
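
A trimmed sketch of that shape is below; the user key, paths, and unit are placeholders rather than my exact config, but it shows how a file directive plus a systemd unit hang together in butane:

    variant: fcos
    version: 1.5.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA... homelab-admin   # placeholder key
    storage:
      files:
        - path: /usr/local/bin/install-k3s.sh
          mode: 0755
          contents:
            inline: |
              #!/bin/bash
              curl -sfL https://get.k3s.io | sh -
    systemd:
      units:
        - name: install-k3s.service
          enabled: true
          contents: |
            [Unit]
            Description=Install k3s on first boot
            ConditionPathExists=!/usr/local/bin/k3s
            After=network-online.target
            Wants=network-online.target

            [Service]
            Type=oneshot
            ExecStart=/usr/local/bin/install-k3s.sh
            RemainAfterExit=yes

            [Install]
            WantedBy=multi-user.target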

outcomes

seamless os upgrade process to fedora 40

users are consistent across the fleet using the same butane user keys

Overall I’m happy with CoreOS because it leverages existing rpm support for packages like k3s without needing a specialist OS