Adventures in Code

WireGuard

Implementing WireGuard manually

To allow more fine-grained control of my network I have decided to implement WireGuard as a site-to-site VPN.

Prior to this I was using Tailscale to advertise routes from Catalyst Cloud into my homelab LAN. The issue I experienced was traffic going all the way to Sydney via Tailscale's relay server. Part of Tailscale's feature set is NAT punch-through, and that requires a public endpoint to provide coordination.

Most companies host in Sydney because it has cloud regions for the big three vendors (Google, Microsoft, Amazon), but that increases the charges for my hosting because international bandwidth is more expensive than national bandwidth.

To remove the relay server problem I decided to try building a WireGuard tunnel with NetworkManager.

Factors that allowed me to implement WireGuard

  • Both of my machines are running Fedora CoreOS with NetworkManager
  • the existing subnet router is responsible for the tunnel
  • I have a public static IP which I can use to allowlist on the Catalyst Cloud security group
  • port forwarding rules translate my public address to the internal address of the subnet router
  • the main node is the ProDesk G2, hosting both the Kubernetes API server and the WireGuard tunnel

I’ve added a specific transit network between Catalyst and home, 10.9.9.0/24. That allows me to assign gateway addresses and static routes in order to forward the set of addresses allocated to MetalLB running inside the cluster over the new tunnel.
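
As a rough sketch, the home end of the tunnel as a NetworkManager keyfile looks something like this; the keys, endpoint and addresses below are placeholders rather than my real values:

    # /etc/NetworkManager/system-connections/wg-transit.nmconnection (illustrative)
    [connection]
    id=wg-transit
    type=wireguard
    interface-name=wg0
    autoconnect=true

    [wireguard]
    # private key for this end of the tunnel (placeholder)
    private-key=<home-private-key>
    listen-port=51820

    # peer section: the Catalyst Cloud edge node (placeholder public key)
    [wireguard-peer.<edge-node-public-key>]
    endpoint=<edge-public-ip>:51820
    allowed-ips=10.9.9.2/32;

    [ipv4]
    method=manual
    # this host's address on the 10.9.9.0/24 transit network
    address1=10.9.9.1/24

The Catalyst end mirrors this, with the MetalLB address range added to the home peer's allowed-ips and a static route so traffic for those addresses is sent back over the tunnel.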

My hope is that it will resolve routing issues. So far on 2degrees I have been routed to my cloud server via Auckland with 38ms of latency, which for a game server is within my acceptable range.

Implementation all worked well thanks to the Red Hat guides on configuring NetworkManager. The only issue I had was not ticking the "automatically connect" tickbox when creating the connections, which led to a couple of automatic-patching-related outages.
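
The fix is a one-liner per connection (the connection name here matches the sketch above):

    nmcli connection modify wg-transit connection.autoconnect yes
    nmcli connection up wg-transit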

In future I’m considering nmstate-operator for the Kubernetes cluster, to allow the configuration to be committed to my homelab code repository.
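
If I go that way, the network config would live in NodeNetworkConfigurationPolicy objects. A rough sketch with an invented static route (hostname, subnet and addresses are placeholders):

    apiVersion: nmstate.io/v1
    kind: NodeNetworkConfigurationPolicy
    metadata:
      name: transit-static-route
    spec:
      nodeSelector:
        kubernetes.io/hostname: prodesk      # placeholder node label
      desiredState:
        routes:
          config:
            - destination: 10.200.0.0/24     # placeholder remote subnet
              next-hop-address: 10.9.9.2     # far end of the transit network
              next-hop-interface: wg0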

Immutable Workers

status

I have four hosts that run my homelab stuff, and I want fewer things to patch by switching from a traditional RPM-based OS to rpm-ostree.

problems

  • CentOS Stream 8 went EOL July 2023
  • over time, config drifts between hosts with manual package selections
  • base filesystem config could be automated
  • system configuration should be saved as code

goals

  • new base OS layer
  • understand the update process
  • create an unattended installer for the server and workers
  • configure MetalLB from the start
  • configure safe kubelet shutdown

resulting design

Fedora CoreOS base layer (FCOS 39)

Butane configuration file and Ignition installer baked into ISO images for automatic provisioning
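
The build step is roughly the following; file names, target disk and ISO name are illustrative rather than exact:

    # transpile the Butane config into an Ignition config
    butane --pretty --strict worker.bu --output worker.ign

    # embed the Ignition config and install target into a live ISO for unattended installs
    coreos-installer iso customize \
        --dest-device /dev/sda \
        --dest-ignition worker.ign \
        --output worker-install.iso \
        fedora-coreos-39-live.x86_64.iso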

systemd units for installing k3s and Tailscale, delivered via file directives in Butane
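
As a flavour of what that looks like, here is a trimmed Butane snippet along those lines; the unit name and install command are my illustration, not the exact contents of my configs:

    variant: fcos
    version: 1.5.0
    systemd:
      units:
        # one-shot unit that installs k3s on first boot
        - name: install-k3s.service
          enabled: true
          contents: |
            [Unit]
            Description=Install k3s
            Wants=network-online.target
            After=network-online.target
            ConditionPathExists=!/usr/local/bin/k3s

            [Service]
            Type=oneshot
            RemainAfterExit=yes
            ExecStart=/usr/bin/sh -c "curl -sfL https://get.k3s.io | sh -"

            [Install]
            WantedBy=multi-user.target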

outcomes

seamless OS upgrade process to Fedora 40

users are consistent across the fleet, using the same Butane user keys

Overall I'm happy with CoreOS because it leverages existing RPM support for packages like k3s without needing a specialist OS.

Homelab Diagrams

Neato diagram

Here’s a diagram of my homelab setup. 16 cores and 64GB of RAM on the cloud would be ~$500/month in ap-southeast-2. I’m only expecting local traffic, so I’m gonna give Catalyst Cloud a go for ingress.

[neato diagram of the homelab: a small Catalyst Cloud edge VM (1 vCPU, 1GB RAM), the EdgeRouter X, the ProDesk and NUC k3s nodes with their CPU/RAM/SSD specs, and NAS storage with SATA SSD and HDD drives]

Reasoning

  1. TINY PUTERS
  2. 16GB of RAM is really expensive in the cloud, and this hardware is being discarded frequently
  3. Each tiny PC comes with an SSD

The Catalyst edge node provides a secure public IP and attaches to the tailnet to get access to Kubernetes resources.

ACLs on the tailnet would be a good next step.