+++
title = 'Hugo on Kubernetes & NGINX'
date = 2024-03-12T16:45:59-07:00
description = 'kubernetes is for girls'
series = ['wtf']
categories = ['Tutorial']
tags = ['meta', 'k8s', 'flux', 'hugo']
toc = true
+++
i decided to make a website. a static one. this one. with [Hugo][hugo]. the
main reason i have for needing a website is as a vanity project, so i have some
stuff to host in a [Kubernetes][k8s] cluster i'm running. the k8s cluster is
also a vanity project.

because i don't like software, i wanted a way to deploy my site that doesn't
involve much of it. this post is about that.

## Getting Started

i built my site by following the straightforward
_[Getting Started][hugo-started]_ guide in the Hugo documentation.
i did `hugo new site estradiol.cloud`. and then `cd estradiol.cloud; git init`.
and then i picked a ridiculous theme
["inspired by terminal ricing aesthetics"][risotto], installing it like `git
submodule add https://github.com/joeroe/risotto.git themes/risotto; echo "theme
= 'risotto'" >> hugo.toml`.[^1]

[^1]: i appreciate the culinary branding.

at this point, my website is basically finished (i also changed the title in
`hugo.toml`). i probably won't be putting anything on it, so there's no point
fiddling with other details.

about deployment, the Hugo guide's _[Basic Usage][hugo-deploy]_ page has this
to offer:
> Most of our users deploy their sites using a CI/CD workflow, where a push{{< sup "1" >}}
> to their GitHub or GitLab repository triggers a build and deployment. Popular
> providers include AWS Amplify, CloudCannon, Cloudflare Pages, GitHub Pages,
> GitLab Pages, and Netlify.
>
> 1. The Git repository contains the entire project directory, typically excluding the
> public directory because the site is built _after_ the push.
importantly, you can't make a post about deploying this way. _everyone_ deploys
this way. if _i_ deploy this way, this site will have no content.

this approach also involves a build system somewhere that can run Hugo to
compile the code and assets and push them onto my host. i definitely
already need Hugo installed on my laptop if i'm going to post anything.[^2]
so now i'm running Hugo in two places. there's surely going to be other
complex nonsense like webhooks involved.

[^2]: unlikely.

![diagram: deploy w/ GitHub Pages & Actions](images/hugo-github-pages.svg)
---

and hang on. let's look at this again:

> 1. The Git repository contains the entire project directory, typically excluding the
> public directory because the site is built _after_ the push.

you're telling me i'm going to build a nice static site and not check the
_actual content_ into version control? couldn't be me.
## Getting Static

suppose i instead check my content in exactly as i intend to serve it?
then i could shell into my server box, pull the site, and _nifty-galifty!_ isn't
this the way it has [always been done][worm-love]?

my problem is that i don't have a server box. i have a _container orchestration
system_. there are several upsides to this[^3] but it means that _somehow_ my
generated content needs to end up in a container. because [Pods][k8s-pods] are
ephemeral and i'd like to run my site with horizontal scalability[^4], i don't
want my container to retain runtime state across restarts or replicas.

[^3]: few of which could be considered relevant for my project.
[^4]: i absolutely will not need this.

i _could_ run a little pipeline that builds a container image wrapping my
content and pushes it to a registry. when i deploy, the cluster pulls the
image, content and all. all ready to go. but now i've got _software_ again:
build stages and webhooks and, to make matters worse, now i'm hosting
and versioning container images.

![diagram: deploy w/ container build](images/hugo-container-build.svg)

i don't want any of this. i just want to put some HTML and static assets behind a
web server.
---

instead, i'd like to deploy a popular container image from a public registry
and deliver my content to it continuously.
a minimal setup to achieve this might look like:

- a `Pod` with:
  - an `nginx` container to serve the content;
  - a `git-pull` sidecar that loops, pulling the content;
  - an `initContainer` to do the initial checkout;
  - an `emptyDir` volume to share between the containers.
- a `ConfigMap` to store the nginx config.

![diagram: minimal pod/configmap setup](images/hugo-minimal-pod-setup.svg)

when a new `Pod` comes up, the `initContainer` mounts the
[`emptyDir`][k8s-emptydir] at `/www` and clones the repository into it. i use
`git sparse-checkout` to avoid pulling repository contents i don't want to serve
out:
```bash
# git-clone command
git clone https://code.estradiol.cloud/tamsin/estradiol.cloud.git --no-checkout --branch trunk /tmp/www;
cd /tmp/www;
git sparse-checkout init --cone;
git sparse-checkout set public;
git checkout;
shopt -s dotglob
mv /tmp/www/* /www
```
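if you want to convince yourself the sparse-checkout dance does what it says, you
can dry-run it against a throwaway local repository. this little harness is mine
(the real thing clones over HTTPS into the `emptyDir`; `$src` stands in for the
remote and `$dst` for the mount at `/www`):

```shell
#!/usr/bin/env bash
set -euo pipefail

src=$(mktemp -d)   # stand-in for the remote repository
dst=$(mktemp -d)   # stand-in for the emptyDir mounted at /www

# build a tiny repo with a public/ dir plus source we don't want to serve
git -C "$src" init -q -b trunk
mkdir -p "$src/public" "$src/content"
echo '<h1>hi</h1>' > "$src/public/index.html"
echo 'draft' > "$src/content/post.md"
git -C "$src" add -A
git -C "$src" -c user.email=a@b -c user.name=t commit -qm init

# the same steps the git-clone command runs
tmp=$(mktemp -d)/www
git clone -q --no-checkout --branch trunk "$src" "$tmp"
cd "$tmp"
git sparse-checkout init --cone
git sparse-checkout set public
git checkout -q
shopt -s dotglob   # so the mv picks up .git as well
mv "$tmp"/* "$dst"
```

afterwards `$dst` holds `public/` and `.git`, and the `content/` source never got
checked out.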
for the sidecar, i script up a `git pull` loop:
```bash
# git-pull command
while true; do
  cd /www && git -c safe.directory=/www pull origin trunk
  sleep 60
done
```
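a `pull` can leave merge commits behind (or just fail) if upstream history ever
gets rewritten. a fetch-and-reset variant is sturdier; here's one iteration of
the loop as a function (my tweak, not what's deployed above):

```shell
#!/usr/bin/env bash
set -euo pipefail

# one iteration of a sturdier sync: fetch + hard reset survives
# force-pushes and never creates merge commits.
sync_once() {
  local dir=$1 branch=$2
  git -C "$dir" -c safe.directory="$dir" fetch -q origin "$branch"
  git -C "$dir" -c safe.directory="$dir" reset -q --hard FETCH_HEAD
}
```

the sidecar would wrap `sync_once /www trunk` in the same
`while true; do ...; sleep 60; done`.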
and i create a [ConfigMap][k8s-configmap] with a server block to configure
`nginx` to use Hugo's `public/` as root:
```txt
# ConfigMap; data: default.conf
server {
  listen 80;
  location / {
    root /www/public;
    index index.html;
  }
}
```
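a couple of optional lines would make misses politer, assuming the theme renders
a `404.html` into `public/` (these extras are my suggestion, not part of the
setup above):

```txt
# optional extras for default.conf (not required)
server {
  listen 80;
  # serve the site's own 404 page instead of the nginx default
  error_page 404 /404.html;
  location / {
    root /www/public;
    index index.html;
    # resolve to a file, then a directory, then 404
    try_files $uri $uri/ =404;
  }
}
```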
the rest of this is pretty much boilerplate:
{{< code-details summary="`kubectl apply -f https://estradiol.cloud/posts/hugo-on-k8s-nginx/site.yaml`" lang="yaml" details=`
# estradiol-cloud.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/instance: estradiol-cloud
    app.kubernetes.io/name: nginx
  name: nginx-server-block
data:
  default.conf: |-
    server {
      listen 80;
      location / {
        root /www/public;
        index index.html;
      }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app.kubernetes.io/instance: estradiol-cloud
    app.kubernetes.io/name: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25.4
      ports:
        - containerPort: 80
          name: http
      volumeMounts:
        - mountPath: /www
          name: www
        - mountPath: /etc/nginx/conf.d
          name: nginx-server-block
    - name: git-pull
      image: bitnami/git
      command:
        - /bin/bash
        - -ec
        - |
          while true; do
            cd /www && git -c safe.directory=/www pull origin trunk
            sleep 60
          done
      volumeMounts:
        - mountPath: /www
          name: www
  initContainers:
    - name: git-clone
      image: bitnami/git
      command:
        - /bin/bash
        - -c
        - |
          shopt -s dotglob
          git clone https://code.estradiol.cloud/tamsin/estradiol.cloud.git --no-checkout --branch trunk /tmp/www;
          cd /tmp/www;
          git sparse-checkout init --cone;
          git sparse-checkout set public;
          git checkout;
          mv /tmp/www/* /www
      volumeMounts:
        - mountPath: /www
          name: www
  volumes:
    - name: www
      emptyDir: {}
    - name: nginx-server-block
      configMap:
        name: nginx-server-block
` >}}
---

my Hugo workflow now looks like:

1. make changes to source;
1. run `hugo --gc --minify`;[^7]
1. `git` commit & push.

my `git pull` control loop takes things over from here and i'm on easy street.

[^7]: i added `disableHTML = true` and `disableXML = true` to `[minify]`
configuration in `hugo.toml` to keep HTML and RSS diffs readable.
## Getting Web

this is going great! my `Pod` is running. it's serving out my code. i get
Continuous Deployment for the low price of 11 lines of `bash`. i mean...
no one can actually browse to my website[^8] but that will be an easy fix,
right? yes. networking is always the easy part.

[^8]: i can check that it's working, at least, with a [port-forward][k8s-port].

first, i need a [`Service`][k8s-svc]. this gives me a proxy to my several
replicas[^11] and in-cluster service discovery.

[^11]: lmao
{{< code-details summary="`kubectl apply -f https://estradiol.cloud/posts/hugo-on-k8s-nginx/service.yaml`" lang="yaml" details=`
# service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/instance: estradiol-cloud
    app.kubernetes.io/name: nginx
  name: nginx
spec:
  type: ClusterIP
  selector:
    app.kubernetes.io/instance: estradiol-cloud
    app.kubernetes.io/name: nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: http
` >}}
next, i need an [`Ingress`][k8s-ingress] to handle traffic inbound to the cluster
and direct it to the `Service`:
{{< code-details summary="`kubectl apply -f https://estradiol.cloud/posts/hugo-on-k8s-nginx/ingress.yaml`" lang="yaml" details=`
# ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  labels:
    app.kubernetes.io/instance: estradiol-cloud
    app.kubernetes.io/name: nginx
  name: nginx
spec:
  ingressClassName: nginx
  rules:
    - host: estradiol.cloud
      http:
        paths:
          - backend:
              service:
                name: nginx
                port:
                  name: http
            path: /
            pathType: Prefix
` >}}
this part expresses a routing rule: traffic reaching the cluster via
`estradiol.cloud` should go to my `Service`, and then to one of its backend `Pod`s.
to actually apply this rule, i need an ingress controller. mine is
[ingress-nginx][nginx-ingress].

when i deployed the controller in my cluster, it created _some more_ `nginx` `Pod`s.
these update their configuration dynamically based on the rules
in my `Ingress` resource(s). the controller also creates a `Service` of
type `LoadBalancer`, which [magically][do-lb] creates a load balancer appliance
in my cloud provider. off-screen, i can point DNS to *that* appliance to finish
the setup.

[![diagram: kubernetes ingress](https://kubernetes.io/docs/images/ingress.svg)][k8s-ingress-wtf]

you can tell it's working by looking at your browser bar.
---

as this has come together, i've gotten increasingly anxious about how much
YAML i've had to write. this is a problem because YAML is software and, as
established, i'm hoping not to have much of that. it's also annoying that most
of this YAML really is just boilerplate.

conveniently, [Bitnami][bitnami] maintains a [Helm][helm] chart that templates
out all the boilerplate and does exactly what i've just been doing.[^9] i can
replace all my YAML with a call out to this chart and a few lines of
configuration, assuming i have the [helm client installed][helm-install]:

[^9]: what incredible luck! (obviously, until now i've been working backward from this chart)
{{< code-details summary="`helm upgrade --install web --create-namespace --namespace estradiol-cloud -f https://estradiol.cloud/posts/hugo-on-k8s-nginx/values.yaml oci://registry-1.docker.io/bitnamicharts/nginx`" lang="yaml" details=`
# values.yaml
cloneStaticSiteFromGit:
  enabled: true
  repository: "https://code.estradiol.cloud/tamsin/estradiol.cloud.git"
  branch: trunk
  gitClone:
    command:
      - /bin/bash
      - -ec
      - |
        [[ -f "/opt/bitnami/scripts/git/entrypoint.sh" ]] && source "/opt/bitnami/scripts/git/entrypoint.sh"
        git clone {{ .Values.cloneStaticSiteFromGit.repository }} --no-checkout --branch {{ .Values.cloneStaticSiteFromGit.branch }} /tmp/app
        [[ "$?" -eq 0 ]] && cd /tmp/app && git sparse-checkout init --cone && git sparse-checkout set public && git checkout && shopt -s dotglob && rm -rf /app/* && mv /tmp/app/* /app/
ingress:
  enabled: true
  hostname: estradiol.cloud
  ingressClassName: nginx
  tls: true
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
serverBlock: |-
  server {
    listen 8080;
    root /app/public;
    index index.html;
  }
service:
  type: ClusterIP
`>}}
![diagram: helm setup](images/hugo-helm-setup.svg)

configuration for the `git-clone` script and our custom server block are added
via `values.yaml`. the `git-pull` loop configured by the chart works as-is.
by using the chart, we get a few other niceties. for instance,
my `Pod`s are now managed by a [`Deployment`][k8s-deployment].[^44] this will make my
grand scale-out plans a breeze.

[^44]: i also snuck a TLS certificate configuration via Let's Encrypt with
[`cert-manager`][cert-mgr] into this iteration. if you're following along at home and
don't have `cert-manager` installed, this should still work fine (but with no HTTPS).
## Getting Flux'd

by now, i'm riding high. my whole setup is my static site code and fewer than
30 lines of YAML.

i *do* have a bunch of stuff deployed into my cluster, and none of this is very
reproducible without all of that. my workflow has also expanded to:

1. for routine site deploys:
   1. make changes to source;
   1. run `hugo --gc --minify`;[^7]
   1. `git` commit & push.
1. to update `nginx`, the chart version, or change config:
   1. make changes to `values.yaml`;
   1. `helm upgrade`.

i could do without the extra `helm` client dependency on my laptop. i'm also
pretty `git push`-pilled, and i really want the solution to all my problems
to take the now familiar shape: put a control loop in my cluster and push
to a `git` repository.
enter [`flux`][fluxcd].

with `flux`, i decide on a repository (and maybe a path within it) to act as
a source for my Kubernetes YAML. i go through a short [bootstrap][fluxcd-boot]
process which installs the `flux` controllers and adds them to the repository.
to make a change to a resource in my cluster, i edit the YAML and push to the
repository. `flux` listens and applies the changes.
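for a sense of the shape, bootstrap wires up roughly this pair of resources
(illustrative only; the names, intervals, and path here are my guesses, not my
exact manifests):

```yaml
# roughly what bootstrap generates (illustrative)
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: GitRepository
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 1m
  ref:
    branch: trunk
  url: https://gitlab.com/no_reply/sublingual.git
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta2
kind: Kustomization
metadata:
  name: flux-system
  namespace: flux-system
spec:
  interval: 10m
  path: ./estradiol.cloud
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
```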

`flux` supports Helm deploys, so i can get that `helm` client off my laptop.
i can also use it to manage my ingress controller, `cert-manager`, `flux`
itself, and whatever other infrastructural junk i may end up needing.
to move my web stack into `flux`, i create a `HelmRepository` resource for
the `bitnami` Helm charts:
```yaml
# bitnami-helm.yaml
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: bitnami
  namespace: default
spec:
  url: https://charts.bitnami.com/bitnami
```
and add a `HelmRelease` pointing to the repository/chart version and containing
my `values.yaml`:
{{< code-details summary="`release.yaml`" lang="yaml" details=`
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: web
  namespace: estradiol-cloud
spec:
  interval: 5m
  chart:
    spec:
      chart: nginx
      version: '15.12.2'
      sourceRef:
        kind: HelmRepository
        name: bitnami
        namespace: default
      interval: 1m
  values:
    cloneStaticSiteFromGit:
      enabled: true
      repository: "https://code.estradiol.cloud/tamsin/estradiol.cloud.git"
      branch: trunk
      gitClone:
        command:
          - /bin/bash
          - -ec
          - |
            [[ -f "/opt/bitnami/scripts/git/entrypoint.sh" ]] && source "/opt/bitnami/scripts/git/entrypoint.sh"
            git clone {{ .Values.cloneStaticSiteFromGit.repository }} --no-checkout --branch {{ .Values.cloneStaticSiteFromGit.branch }} /tmp/app
            [[ "$?" -eq 0 ]] && cd /tmp/app && git sparse-checkout init --cone && git sparse-checkout set public && git checkout && shopt -s dotglob && rm -rf /app/* && mv /tmp/app/* /app/
    ingress:
      enabled: true
      hostname: estradiol.cloud
      ingressClassName: nginx
      tls: true
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod
    serverBlock: |-
      server {
        listen 8080;
        root /app/public;
        index index.html;
      }
    service:
      type: ClusterIP
`>}}
when i push these to my `flux` [source repository][sublingual-ec], the Helm
release rolls out.

![diagram: flux git push/deploy sequence](images/flux-seq.svg)
## A Note About Software

in the end, i'm forced to admit there's still a lot of software involved in all
of this. setting aside the stuff that provisions and scales my cluster nodes,
and the _magic_ `LoadBalancer`, i have:

- `nginx` (running from a stock image);
- `git` & `bash` (running from a stock image);
- a remote git server (i'm running `gitea`[^99], but github dot com is fine here);
- Kubernetes (oops!);
- `flux`, especially `kustomize-controller` and `helm-controller`;
- `ingress-nginx` controller;
- `cert-manager` and Let's Encrypt;
- the `bitnami/nginx` Helm chart.

[^99]: because i'm running `gitea` in my cluster and i want to avoid a circular
dependency for my `flux` source repository, i also depend on GitLab dot com.

the bulk of this i'll be able to reuse for the other things i deploy on the
cluster[^80]. and it replaces SaaS black-boxes like "AWS Amplify, CloudCannon,
Cloudflare Pages, GitHub Pages, GitLab Pages, and Netlify" in the recommended
Hugo deployment.

to actually deploy my site, i get to maintain a `bash` script for `git-clone`, my
NGINX config, and a couple of blobs of YAML.

[^80]: i won't.

at least there are no webhooks.
---
_fin_
[bitnami]: https://bitnami.com/
[cert-mgr]: https://cert-manager.io/docs/tutorials/acme/nginx-ingress/
[do-lb]: https://docs.digitalocean.com/products/kubernetes/how-to/add-load-balancers/
[fluxcd]: https://fluxcd.io/
[fluxcd-boot]: https://fluxcd.io/flux/installation/bootstrap/
[helm]: https://helm.sh
[helm-install]: https://helm.sh/docs/intro/install/
[hugo]: https://gohugo.io
[hugo-deploy]: https://gohugo.io/getting-started/usage/#deploy-your-site
[hugo-started]: https://gohugo.io/getting-started
[k8s]: https://kubernetes.io
[k8s-configmap]: https://kubernetes.io/docs/concepts/configuration/configmap/
[k8s-deployment]: https://kubernetes.io/docs/concepts/workloads/controllers/deployment/
[k8s-emptydir]: https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
[k8s-init]: https://kubernetes.io/docs/concepts/workloads/pods/init-containers/
[k8s-ingress]: https://kubernetes.io/docs/concepts/services-networking/ingress/
[k8s-ingress-wtf]: https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress
[k8s-pods]: https://kubernetes.io/docs/concepts/workloads/pods/
[k8s-port]: https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/
[k8s-pv]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
[k8s-svc]: https://kubernetes.io/docs/concepts/services-networking/service/
[nginx-ingress]: https://kubernetes.github.io/ingress-nginx/
[risotto]: https://github.com/joeroe/risotto
[sublingual-ec]: https://gitlab.com/no_reply/sublingual/-/tree/trunk/estradiol.cloud
[worm-love]: https://www.mikecurato.com/worm-loves-worm