[Deploying a basic React site — Part 1] Setting up a Kubernetes cluster hosted on a home server, protected by Cloudflare

jpeg's files
8 min read · Sep 8, 2024


I have a simple use case. I want to split receipts between my friends and me when we go out to eat and a single person pays, accounting for various things like tip, tax, foreign currencies, and partial cash payments by various attendees.

What’s the smart way to do this? Probably just keep splitting these things up in a spreadsheet. It only takes ~10 minutes and it’s effective, reliable, and trustworthy! But, as a software engineer, it’s my duty to spend hundreds of hours on things that can be done by hand in a few minutes.

Okay, so I want to deploy a website. I know React, and it’s simple to develop a site that does all this for me, so I can just throw the static files into an S3 bucket and call it a day.

But of course, that’s not nearly overcomplicated enough to satisfy my SWE lizard brain. So logically I decided that the only sane thing to do was to self-host a Kubernetes cluster on my home server with my own SSO provider for administrative functions, automated software deployment with ArgoCD, a self-hosted artifact repository for container images and helm charts, and full observability into the behavior of my cluster and site with Grafana and Loki.

So, follow along as I overengineer the shit out of something menial.

“If you wish to make an apple pie from scratch, you must first invent the universe.” ― Carl Sagan, Cosmos

Is it even a tech article in 2024 if I don’t put an AI generated image in here?

Setting up a server

We could of course use a cloud provider’s Kubernetes distribution, or even just set up a virtual server on one of those clouds, but where’s the fun in that? To deploy this React app, we’re going to start from the ground up with our own hardware.

After feeling particularly inspired by the TinyMiniMicro Home Lab Revolution, I ended up purchasing a refurbished HP Z2 Mini G3 workstation off of eBay for about $180. For that, I got 32GB of RAM, 1TB of SSD storage, and a Xeon E3-1245 processor. It’s more than sufficient for my needs and looks nice in my living room. After a few days of waiting, it arrived and I was ready to start my own janky cloud.

I started off by flashing an Ubuntu Server ISO onto a spare USB drive and installing it on the server. I plugged the server directly into my router, booted the machine up, and set up my root user.
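
If you haven’t done this before, writing the ISO looks something like the following (assuming the USB stick shows up as /dev/sdX; double-check with lsblk first, since dd overwrites whatever you point it at):

# Write the ISO to the USB stick (/dev/sdX is a placeholder; verify with lsblk first)
sudo dd if=ubuntu-server.iso of=/dev/sdX bs=4M status=progress && sync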

Setting up port forwarding

In order to be able to forward requests to my deployment, I needed to be able to route network traffic from the public internet to my new server. I have Quantum Fiber and an old CenturyLink C4000XG router that was provided for free when I signed up for my internet. It’s certainly not enterprise-ready networking equipment, but it technically can get the job done. After fighting with the router software and management UI for multiple hours (and performing about 5 factory resets), I was eventually able to set up port forwarding to my server on port 443.

While this configuration looks simple, it took some unknown combo of maneuvers to get it to actually forward the traffic correctly.

Additionally, I ensured that traffic on port 443 was allowed through the router firewall, which for some strange reason only blocks inbound traffic for Windows Messaging and Windows Service in the default configuration.

The router automatically enables DHCP reservation for devices on the network, so I don’t have to worry about the IP address of my server changing on my internal network.

Okay great, so if I can get traffic to my router, it should now forward it directly to my server on port 443.

Setting up DDNS

Because I don’t have a static IP for my router, my public address can change at any time, which would break any domain name pointing directly at it. To fix this, we can set up DDNS (Dynamic DNS) in the router settings. I’m using the free service No-IP (though the free tier requires manually renewing the configuration once a month). There are more resilient ways to achieve this, either by paying for a service or by running a script somewhere that updates the IP directly with the DNS provider. Cloudflare, for example, has some easy-to-use APIs for this, which I’ll cover in a later article on upping the availability, security, and robustness of my setup.
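
As a rough sketch of that script approach, here’s what updating an A record via Cloudflare’s DNS API could look like. ZONE_ID, RECORD_ID, and CF_API_TOKEN are placeholders you’d pull from the Cloudflare dashboard, and home.your.domain is a hypothetical record name:

# Look up the router's current public IP (ipify is one of several services for this)
ip=$(curl -s https://api.ipify.org)

# Push it to the existing A record via Cloudflare's DNS API
curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records/$RECORD_ID" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data "{\"type\":\"A\",\"name\":\"home.your.domain\",\"content\":\"$ip\",\"ttl\":1,\"proxied\":false}"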

Setting up a domain name

I’m of course going to eventually want to expose my React site at a domain that I own, but I additionally wanted to be able to access the administrative applications of my deployment remotely, as well as do things like receive webhooks from GitHub to my ArgoCD instance. To achieve both of these, I first needed a domain name, which you can get from Squarespace Domains (or your favorite registrar) for pretty cheap. I purchased a five-year registration from Squarespace Domains for $100.

Setting up Cloudflare

After purchasing the domain, I wanted to make sure that everything was secured by Cloudflare, which handles DDoS protection, malicious traffic filtering, bot prevention, and a few other nice security functions.

In Cloudflare, add a new site. The free tier is more than sufficient for my use case, so I chose that and moved on to updating my DNS nameservers. Cloudflare provides simple instructions for this process in the setup flow, and the steps generally included:

  1. Logging into Squarespace
  2. Turning off DNSSEC
  3. Adding the Cloudflare nameservers (in my case, these were annalise.ns.cloudflare.com and sam.ns.cloudflare.com)

Lastly, we need to configure our DNS records to point from the domain we just purchased to the DDNS domain configured above. This way, the main domain always points to the stable DDNS hostname, and that hostname resolves to whatever the router’s IP happens to be at the time. We do this by adding a CNAME record for the root domain that points to our DDNS address. (A CNAME at the root isn’t normally allowed, but Cloudflare supports it via CNAME flattening.) This chain of lookups is invisible to the end user.
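
You can sanity-check the chain with dig; yourhost.ddns.net here is a stand-in for whatever DDNS hostname you configured:

# Resolve the root domain; once the record is proxied through Cloudflare,
# this returns Cloudflare edge IPs rather than your router's IP
dig +short your.domain

# Resolve the DDNS hostname directly; this should return the router's current public IP
dig +short yourhost.ddns.net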

Configuring HTTPS

In order to encrypt all the traffic between our server and Cloudflare, we’ll tweak a couple of settings. This involves two steps:

  1. Create an origin certificate in Cloudflare and save both the certificate and private key to the server in a place that can be accessed later (a sketch of this follows below)
  2. Change the site’s SSL/TLS mode to “Full (strict)”, which ensures that Cloudflare will only connect to the origin server if it presents a valid certificate
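
For step 1, something like the following works; the filenames are placeholders, just keep them consistent with wherever you reference the cert later on:

# Copy the origin certificate and key somewhere the ingress setup can read them later,
# with permissions restricted to root (paths are placeholders)
sudo install -m 600 -o root -g root yourcert.pem /etc/ssl/certs/yourcert.pem
sudo install -m 600 -o root -g root yourkey.key /etc/ssl/certs/yourkey.key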

Setting up a firewall on the server

While we can now be sure that the traffic coming from Cloudflare is encrypted and our public domain name is protected against DDoS and other attacks, someone could still hit our server’s IP (or DDNS address) directly and bypass Cloudflare entirely. To prevent this, we can leverage Ubuntu’s built-in firewall UFW (Uncomplicated Firewall) to only allow traffic from Cloudflare’s public IP list on port 443. Cloudflare makes this IP list publicly available via their API, so we can perform a simple curl request to get the list and dynamically apply it to our firewall.

# Ensure that the firewall is enabled
sudo ufw enable

# Fetch the Cloudflare IPs
response=$(curl --silent --request GET \
  --url https://api.cloudflare.com/client/v4/ips \
  --header 'Content-Type: application/json')

# Extract the IPv4 CIDRs using jq
ipv4_cidrs=$(echo "$response" | jq -r '.result.ipv4_cidrs[]')

# Loop through each IPv4 CIDR and allow only HTTPS traffic (443) from those IPs
for cidr in $ipv4_cidrs; do
  sudo ufw allow from "$cidr" to any port 443 proto tcp
done

# Reload UFW to apply changes
sudo ufw reload

Note that these IPs are liable to change at any time, so this should likely be a script that runs periodically.
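
One way to do that, as a sketch: save the snippet above as a script (the path below is hypothetical) and run it from cron. UFW skips rules that already exist on re-runs, though a fuller version would also remove rules for CIDRs Cloudflare retires:

# Refresh the Cloudflare IP allowlist nightly at 4am
echo '0 4 * * * root /usr/local/bin/update-cloudflare-ufw.sh' | sudo tee /etc/cron.d/cloudflare-ufw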

Additionally, I want to be able to SSH into this machine from my development machine, so I add the following rule (with $DEV_IP set to my laptop’s local network IP address):

sudo ufw allow from $DEV_IP to any port 22 proto tcp

To secure this setup further, I’ve additionally disabled password authentication in /etc/ssh/sshd_config and ensured that only key-based authentication is allowed.
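
For reference, the relevant sshd_config directives look like this, followed by an SSH daemon restart to pick them up:

# /etc/ssh/sshd_config
PasswordAuthentication no
PubkeyAuthentication yes

# Restart the SSH daemon to apply the changes
sudo systemctl restart ssh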

Setting up K3s

Now that traffic should be securely and correctly forwarded to our server, we need a place for it to go. In our case, that’s a Kubernetes cluster hosting our apps.

For the Kubernetes distribution, I’ve chosen K3s, which is lightweight and ridiculously easy to set up.

curl -sfL https://get.k3s.io | sh -

And that’s it. The install also automatically sets up kubectl for interacting with the cluster.
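
A quick sanity check that the node is up and kubectl is wired to the cluster:

# Should show a single node in the Ready state
sudo kubectl get nodes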

Deploying a sample application

To make sure everything is configured correctly, we’ll be deploying a simple application that just serves the string “Hello, World!” at our domain.

First, we create a namespace, deployment, and service to host our application. We’ll use HashiCorp’s http-echo service, which does exactly what it sounds like: echoes whatever content it was started with over HTTP.

# helloworld.yml
apiVersion: v1
kind: Namespace
metadata:
  name: hello-world
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: hashicorp/http-echo:latest
          args:
            - "-text=Hello, World!"
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-service
  namespace: hello-world
spec:
  selector:
    app: hello-world
  ports:
    - protocol: TCP
      port: 80
      targetPort: 5678

sudo kubectl apply -f helloworld.yml
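
Before wiring up ingress, it’s worth confirming the pod actually came up:

# Should show one hello-world pod in the Running state
sudo kubectl get pods -n hello-world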

Now, we will have to configure ingress to allow external clients to connect to the hello-world-service. Since we want to only allow HTTPS traffic, we’ll first add our certificate and key that we obtained from Cloudflare as a secret that our ingress controller can access.

sudo kubectl create secret tls hello-world-tls \
  --cert=/etc/ssl/certs/yourcert.pem \
  --key=/etc/ssl/certs/yourkey.key \
  -n hello-world

Next, we can define the actual ingress configuration. We are utilizing Traefik, which ships with K3s.

# ingress.yml
apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: hello-world-ingress
  namespace: hello-world
spec:
  entryPoints:
    - websecure
  tls:
    secretName: hello-world-tls
    domains:
      - main: your.domain
        sans:
          - "*.your.domain"
  routes:
    - match: Host(`your.domain`)
      kind: Rule
      services:
        - name: hello-world-service
          port: 80

sudo kubectl apply -f ingress.yml

Now navigate to your domain, and if everything is configured correctly, you should see “Hello, World!” in your browser:

Our React app’s humble beginnings
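
You can also verify from a terminal:

# Should print "Hello, World!"
curl https://your.domain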

Great! Looks like everything is working. Let’s clean up our cluster so we can move on to more serious business.

sudo kubectl delete namespace hello-world 

Coming soon: [Deploying a basic React site — Part 2] Setting up self-hosted ArgoCD with Keycloak SSO on Kubernetes
