Publishing this blog to my local K3S Cluster with Gitea Actions
In my previous post I enabled Gitea Actions on the Gitea server I installed on my local K3S cluster. Now I can write CI/CD pipelines to automatically build and deploy code entirely locally. In How I built and deployed this blog I described the infrastructure and deployment pipeline of my blog. Deploying this blog to my local K3S cluster was a natural direction to go next.
Setting up the Repository and other Gitea Prerequisites
My blog is an existing project synced to a GitHub repository as its remote origin, so I created a repository in Gitea to act as a second remote for use in Gitea Actions.
I added Gitea as a separate remote, then used git remote set-url to configure origin to push to both my local Gitea repository and the original GitHub repository.
# First add Gitea as a separate remote and push
git remote add gitea https://gitea.home.stuff/kckempf/Grokkist.git
git push gitea main
# Then configure origin to push to both GitHub and Gitea
git remote set-url --add --push origin https://github.com/kckempf/<my-git-remote>.git
# Use a fine-grained <token> with minimal scope, or SSH instead
git remote set-url --add --push origin https://kckempf:<token>@gitea.home.stuff/kckempf/Grokkist.git
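The dual-push setup can be sandbox-tested with a throwaway repository. The URLs below are placeholders, not my real remotes; the point is just to confirm that after two set-url --add --push calls, a single push to origin targets both remotes:

```shell
# Create a throwaway repo to demonstrate the dual push-URL configuration.
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo

# Add origin, then register BOTH push URLs explicitly (placeholder URLs).
git remote add origin https://github.com/example/demo.git
git remote set-url --add --push origin https://github.com/example/demo.git
git remote set-url --add --push origin https://gitea.example/example/demo.git

# Lists both push URLs: a single `git push origin` now pushes to each.
git remote get-url --push --all origin
```

Note that once any push URL is added explicitly, the fetch-derived default is no longer used, which is why the original GitHub URL has to be re-added as a push URL too.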
In the Gitea UI, I added three secrets to the repository by navigating to Settings > Actions > Secrets.
- REGISTRY_USERNAME - the username I use to access my local container registry
- REGISTRY_PASSWORD - the password for that user
- KUBE_CONFIG - a base64-encoded version of my cluster's Kubernetes config, for configuring the kubectl command on the job container
To generate the KUBE_CONFIG value I used the following command:
cat ~/.kube/config | base64 | tr -d '\n'
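Before saving the secret, it's worth confirming the encoding round-trips cleanly. A quick sanity check using a throwaway sample file (the /tmp paths are just for illustration, not part of my setup):

```shell
# Write a small stand-in for a kubeconfig to a temp file.
printf 'apiVersion: v1\nkind: Config\n' > /tmp/sample-kubeconfig

# Encode the same way as the real secret: base64 with newlines stripped.
encoded=$(base64 < /tmp/sample-kubeconfig | tr -d '\n')

# Decoding must reproduce the original file byte-for-byte, or the job's
# kubectl setup step would write a corrupt ~/.kube/config.
printf '%s' "$encoded" | base64 -d > /tmp/sample-decoded
diff /tmp/sample-kubeconfig /tmp/sample-decoded && echo "round-trip OK"
```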
Kubernetes Architecture
In order to publish my blog on my K3S cluster, I needed some way to serve the pages of the blog. In How I built and deployed this blog I described creating an S3 bucket to hold the generated files and how to serve those files with CloudFront as the CDN caching layer. Fundamentally this is a way of running an old school Web 1.0 site on the cloud, with the CDN providing a level of caching and security. All I needed, then, was a container capable of serving web pages.
To create this container on my K3S cluster, make it accessible as a service, and then make that service accessible outside of the cluster, I needed 4 things:
- A Namespace to organize all the pieces
- A Deployment to create a container in a Kubernetes pod
- A Service to enable communication to the pod
- An HTTPRoute to allow calls from outside of the cluster to the service
I used the YAML files below to create all of these parts. Because I want to use the path prefix /grokkist/ in my K3S deployment of the blog, I had to make two particular changes: adding the PathPrefix rule to my HTTPRoute and using a custom container image in my Deployment. I go into more detail about the container image in the next section.
# namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: grokkist
---
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grokkist
  namespace: grokkist
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grokkist
  template:
    metadata:
      labels:
        app: grokkist
    spec:
      imagePullSecrets:
        - name: gitea-registry-secrets
      containers:
        - name: grokkist
          image: registry.home.stuff/kckempf/grokkist:latest # custom image
          ports:
            - containerPort: 80
          resources:
            requests:
              memory: "64Mi"
              cpu: "50m"
            limits:
              memory: "128Mi"
              cpu: "100m"
---
# service.yaml
apiVersion: v1
kind: Service
metadata:
  name: grokkist
  namespace: grokkist
spec:
  selector:
    app: grokkist
  ports:
    - port: 80
      targetPort: 80
---
# httproute.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: grokkist
  namespace: grokkist
spec:
  parentRefs:
    - name: traefik-gateway
      namespace: kube-system
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /grokkist
      backendRefs:
        - name: grokkist
          port: 80
Custom Grokkist Deployment Image
The nginx:alpine image is a minimal image for serving static files. However, serving under the /grokkist/ path prefix introduced a complication, as by default paths like /grokkist/blog/ would get trimmed to /blog/. I applied the following conf file to a custom image based on nginx:alpine to solve that problem. The configuration:
- Listens on port 80
- Matches any request with path prefix /grokkist/ and serves the files at /usr/share/nginx/html/
- Redirects requests from /grokkist to /grokkist/, in case it sees any paths without the terminal /
server {
    listen 80;

    location /grokkist/ {
        alias /usr/share/nginx/html/;
        try_files $uri $uri/ =404;
    }

    location = /grokkist {
        return 301 /grokkist/;
    }
}
The following custom Docker image both applies the settings in the above conf file and copies the Astro-generated static site files from the local dist folder to the /usr/share/nginx/html folder on the container.
FROM nginx:alpine
COPY dist/ /usr/share/nginx/html
COPY nginx.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
Because I added /grokkist/ to the path, I had to update the navigation links between the blog posts. I prefixed the paths with BASE_URL, which defaults to /. For example, in the Header.astro component:
---
// ... other frontmatter
const base = import.meta.env.BASE_URL;
---
<header>
  <nav>
    <h2><a href="/">{SITE_TITLE}</a></h2>
    <div class="internal-links">
      <HeaderLink href={`${base}`}>Home</HeaderLink>
      <HeaderLink href={`${base}blog`}>Blog</HeaderLink>
    </div>
    <SocialLinks />
  </nav>
</header>
The base path is passed in as an environment variable during the build step, which is described later in the deployment pipeline. These changes were also applied to the other files that generate relative paths in hyperlinks.
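For context, import.meta.env.BASE_URL is populated from Astro's base config option at build time. A minimal sketch of how the pipeline's PUBLIC_BASE_PATH variable could feed that option (this is not my exact config; the fallback to / is an assumption):

```javascript
// astro.config.mjs (sketch): Astro's `base` option is what components
// read back as import.meta.env.BASE_URL at build time. PUBLIC_BASE_PATH
// is the variable set by the CI pipeline; '/' is the local default.
import { defineConfig } from 'astro/config';

export default defineConfig({
  base: process.env.PUBLIC_BASE_PATH ?? '/',
});
```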
Custom Job Container Image
Before moving on to the pipeline, I had one more problem to solve: getting the job container to trust the SSL certificate of my container registry. While the runner that watches for changes in my repository and kicks off builds already has my self-signed certificate authority installed, the pipeline itself runs in a job container, which does not. That container needs to be able to reach the registry to publish the latest version of my grokkist image on every run.
While I could add the cert to the container during the pipeline job, I could also create a custom container that already has that certificate installed. That way I don’t need to add the cert on every run. I made a custom container that did just that, installing kubectl at the same time to kill two birds with one stone.
FROM docker.gitea.com/runner-images:ubuntu-latest

# Install kubectl
RUN curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/arm64/kubectl" && \
    chmod +x kubectl && \
    mv kubectl /usr/local/bin/kubectl

# Trust local CA
COPY ca.crt /usr/local/share/ca-certificates/k3s-home-ca.crt
RUN mkdir -p /etc/docker/certs.d/registry.home.stuff && \
    cp /usr/local/share/ca-certificates/k3s-home-ca.crt /etc/docker/certs.d/registry.home.stuff/ca.crt && \
    update-ca-certificates
I built the custom job container image and published it to my registry with docker build and docker push:
docker build -f Dockerfile.jobcontainer -t registry.home.stuff/kckempf/ubuntu-runner:latest .
docker push registry.home.stuff/kckempf/ubuntu-runner:latest
Deployment Pipeline
With all of the prerequisites out of the way, I could finally write my Gitea Actions pipeline.
- Run whenever a change is pushed to the main branch
- Run a single job called build-and-deploy, on one of the custom containers, with the environment variable PUBLIC_BASE_PATH set to /grokkist/ (this is mapped to BASE_URL in the Astro app)
- Check out the code
- Set up Node
- Install dependencies using npm
- Build using npm
- Set up Docker Buildx to use the local registry with the self-signed certificate
- Log in to the local registry
- Build and publish the custom Docker image with the static site content; the version tag is unique for every run in case I need to roll back
- Create an image pull secret to store the registry credentials that Kubernetes will use to access the registry
- Create or update the Kubernetes resources on the cluster by applying the Kubernetes manifests
- Deploy the change to Kubernetes
name: Build and Deploy

on:
  push:
    branches:
      - main

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    container:
      image: registry.home.stuff/kckempf/ubuntu-runner:latest
    env:
      PUBLIC_BASE_PATH: /grokkist/
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Set up Node
        uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Install dependencies
        run: npm ci

      - name: Build
        run: npm run build

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          buildkitd-config-inline: |
            [registry."registry.home.stuff"]
              ca=["/etc/docker/certs.d/registry.home.stuff/ca.crt"]

      - name: Log in to registry
        uses: docker/login-action@v3
        with:
          registry: registry.home.stuff
          username: ${{ secrets.REGISTRY_USERNAME }}
          password: ${{ secrets.REGISTRY_PASSWORD }}

      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: registry.home.stuff/kckempf/grokkist:${{ gitea.sha }}
          provenance: false
          sbom: false

      - name: Set up kubectl
        run: |
          mkdir -p ~/.kube
          echo "${{ secrets.KUBE_CONFIG }}" | base64 -di > ~/.kube/config
          chmod 600 ~/.kube/config

      - name: Create image pull secret
        run: |
          kubectl create secret docker-registry gitea-registry-secrets \
            --docker-server=registry.home.stuff \
            --docker-username=${{ secrets.REGISTRY_USERNAME }} \
            --docker-password=${{ secrets.REGISTRY_PASSWORD }} \
            --namespace=grokkist \
            --dry-run=client -o yaml | kubectl apply -f -

      - name: Apply Kubernetes manifests
        run: kubectl apply -f k8s/

      - name: Deploy to Kubernetes
        run: kubectl set image deployment/grokkist grokkist=registry.home.stuff/kckempf/grokkist:${{ gitea.sha }} -n grokkist
Conclusion
I came into my Kubernetes projects with some goals in mind: setting up my own infrastructure and deploying on it, running my own version control, and deploying my code to my own infrastructure using Continuous Delivery. I knew a lot less about how to do all of that when I started than I do now, and I've enjoyed learning about the bits under the surface.