Setting up Gitea (GitHub-like) Actions on my K3S Cluster


In my previous post I set up a local git server on my K3S cluster using Gitea. While it’s nice to have a local Git server, my real goal is to set up continuous delivery on my local projects. Gitea supports pipelines using Gitea Actions, which has near feature parity with GitHub Actions. Getting Gitea Actions up and running took a bit more work than I expected.

Preparing Gitea to run actions

Enable Actions in Gitea configuration

Gitea does not have Actions enabled by default. Since I installed Gitea on my K3S cluster using a Helm chart, I updated my Helm values file and re-applied it.

gitea:
  config:
    actions:
      ENABLED: true

# Database pod configuration
postgresql-ha:
  postgresql:
    livenessProbe:
      initialDelaySeconds: 300
      timeoutSeconds: 10
      failureThreshold: 6
    readinessProbe:
      initialDelaySeconds: 300
      timeoutSeconds: 10
      failureThreshold: 6

Since Helm values are declarative and applying a chart is idempotent, I could simply add the new Actions configuration to my existing values file. When the upgrade is applied, the database configuration that is already in place won't be re-applied. This is convenient, as I can keep a record of my configuration changes and my current state in source control.

I applied the new values using helm upgrade.

helm upgrade gitea gitea-charts/gitea -f values.yaml -n <namespace>

Install Docker Engine prereq for Gitea runner

Gitea runs actions with a dedicated runner, act_runner, which operates independently of the rest of Gitea and is built on act, an open-source tool for running GitHub Actions locally. Some of the actions in GitHub Actions (and therefore Gitea Actions) pull or build container images, which act does by communicating with a Docker daemon via the Docker socket. My K3S install uses K3S's default container runtime, containerd, so I needed to install Docker Engine on my nodes before setting up my runner.

This does mean that my nodes will have both containerd (used by K3S) and Docker Engine (used by the act runner) running concurrently, but this should not be a problem with these particular Pis, as they have 8GB of RAM. With a smaller amount of RAM I probably would put in the work to switch my K3S install to use Docker Engine for its own container runtime, but it’s not necessary in this case.

In my previous post I described managing the configuration of my nodes using Ansible, so I updated the playbook:

---
- name: Configure k3s cluster nodes
  hosts: k3s_cluster
  become: true

  vars:
    nfs_server: "192.168.50.X"
    nfs_mount_path: "/mnt/tank/k3s"

  tasks:
    - name: Update apt cache
      apt:
        update_cache: true
        cache_valid_time: 3600
    
    - name: Install required packages
      apt:
        name:
          - nfs-common
          - open-iscsi
          - curl
          - git
        state: present

    - name: Enable and start rpcbind
      systemd:
        name: rpcbind
        enabled: true
        state: started
    
    - name: Test NFS mount is reachable
      command: "showmount -e {{ nfs_server }}"
      register: nfs_exports
      changed_when: false
      failed_when: false
    
    - name: Show NFS exports from TrueNAS
      debug:
        var: nfs_exports.stdout_lines

# New Docker install instructions
    - name: Download Docker install script
      get_url:
        url: https://get.docker.com
        dest: /tmp/get-docker.sh
        mode: '0755'
      
    - name: Install Docker
      command: /tmp/get-docker.sh
      args:
        creates: /usr/bin/docker
    
    - name: Enable and start Docker
      systemd:
        name: docker
        enabled: true
        state: started
    
    - name: Add user to docker group
      user:
        name: "{{ ansible_user }}"
        groups: docker
        append: true
    
    - name: Configure Docker daemon
      copy:
        dest: /etc/docker/daemon.json
        content: |
          {
            "log-driver": "json-file",
            "log-opts": {
              "max-size": "10m",
              "max-file": "3"
            }
          }
      notify: restart docker
    
  handlers:
    - name: restart docker
      systemd:
        name: docker
        state: restarted

As with the Helm values file, this playbook is declarative and idempotent. Ansible evaluates every task on every run, but tasks whose target state already exists report no change — the Docker install, for example, is guarded by creates: /usr/bin/docker, so it only executes while that binary is missing. The net effect of this run is that only the apt cache update and the new Docker Engine tasks do any real work.

ansible-playbook -i inventory.ini setup-nodes.yml

Enabling the Gitea built-in Container Registry

Kubernetes always pulls container images from a registry, so if I want my Gitea Actions pipelines to deploy applications to my K3S cluster, I need a container registry. Gitea has a built-in container registry, so I can use the local registry on my Gitea server. But for the Docker Engine on my nodes to have access to this registry, I needed to do some work I had skipped before: setting up a local domain name and enabling SSL.

Enabling SSL for the Gitea registry

Docker requires HTTPS for accessing a container registry. There is a daemon setting, insecure-registries, that should allow plain-HTTP access to registries on an allow list, but I wasn't able to get it to work. While I'm sure I could get there, this is a good example of software nudging me toward making the better choice: enabling SSL for my registry.

Since I'm only using my K3S cluster internally, I can create my own self-signed root certificate rather than getting one issued by a public certificate authority. I generated the root CA on my host computer using step:

step certificate create "K3s Home CA" ca.crt ca.key --profile root-ca --no-password --insecure
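step isn't the only way to mint the root. As a sanity check — or an alternative if step isn't installed — plain openssl can generate a roughly equivalent self-signed root and print its subject, issuer, and validity; for a root CA, the subject and issuer should be identical. The file names here mirror the step command above.

```shell
# Generate a self-signed root CA roughly equivalent to the step command
# above (a sketch; recent openssl marks "req -x509" certs as a CA by
# default), then inspect it. Subject and issuer should match for a root.
openssl req -x509 -newkey rsa:4096 -nodes \
  -keyout ca.key -out ca.crt \
  -subj "/CN=K3s Home CA" -days 3650
openssl x509 -in ca.crt -noout -subject -issuer -dates
```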

To enable SSL with the services on my K3S cluster, I installed cert-manager using Helm:

helm repo add jetstack https://charts.jetstack.io
helm repo update

helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set crds.enabled=true

I stored the root certificate as a Kubernetes secret:

kubectl create secret tls k3s-ca-secret --cert=$HOME/k3s-ca/ca.crt --key=$HOME/k3s-ca/ca.key -n cert-manager

I created a ClusterIssuer using that Certificate Authority and stored the config as cluster-issuer.yaml.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: k3s-ca-issuer
spec:
  ca:
    secretName: k3s-ca-secret

kubectl apply -f cluster-issuer.yaml

Then I created a certificate for my registry domain using the config file registry-certificate.yaml:

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: registry-cert
  namespace: kube-system
spec:
  secretName: registry-tls
  issuerRef:
    name: k3s-ca-issuer
    kind: ClusterIssuer
  dnsNames:
    - registry.home.stuff

kubectl apply -f registry-certificate.yaml

Finally, I updated the Traefik Gateway to add an HTTPS listener for the registry, stored as traefik-gateway.yaml:

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: traefik-gateway
  namespace: kube-system
spec:
  gatewayClassName: traefik

  listeners:
    - name: http
      protocol: HTTP
      port: 8000
      allowedRoutes:
        namespaces:
          from: All
    - name: websecure-registry
      protocol: HTTPS
      port: 8443
      hostname: registry.home.stuff
      tls:
        mode: Terminate
        certificateRefs:
          - name: registry-tls
            namespace: kube-system
      allowedRoutes:
        namespaces:
          from: All
    - name: tcp
      protocol: TCP
      port: 3000
      allowedRoutes:
        namespaces:
          from: All

kubectl apply -f traefik-gateway.yaml -n kube-system

Setting up the local domain name

I had been using the node names as the domains for my services on K3S, along with various paths. For example, http://raspberrypi1/gitea would navigate to my Gitea instance. However, Docker only treats the first component of an image name as a registry host if it contains a dot or a colon; anything else is interpreted as a namespace on docker.io, so if I tried to use raspberrypi1/registry as the address for my registry, Docker would resolve it as docker.io/raspberrypi1/registry. Hence I needed a full 3-part domain name.
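The rule Docker applies here can be mimicked in a few lines of shell (my own sketch of the behavior, not Docker's actual code): the first component of an image reference counts as a registry host only if it contains a dot or a colon, or is exactly localhost.

```shell
# Mimic how Docker decides whether the first component of an image
# reference is a registry host or a docker.io namespace.
is_registry_host() {
  case "$1" in
    localhost|*.*|*:*) echo "registry host" ;;
    *)                 echo "docker.io namespace" ;;
  esac
}

is_registry_host "raspberrypi1"         # docker.io namespace
is_registry_host "registry.home.stuff"  # registry host
is_registry_host "localhost:5000"       # registry host
```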

With my network configuration, I can manage this domain name through the ASUS router's configuration front-end. I logged into the router, then navigated to LAN under Advanced Settings. There, under the LAN IP tab, I set the domain name to home.stuff. Under the DHCP Server tab I added a DNS entry mapping the hostname registry to the IP address of raspberrypi1. With this router, that gives the raspberrypi1 node the domain name registry.home.stuff.

The Traefik Gateway I set up earlier in my K3S cluster lets any HTTPRoute defined on the cluster use the domains associated with any of the nodes. That means I could now point the registry.home.stuff domain at the Gitea registry with an HTTP route, using the following gitea-registry-httproute.yaml file:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: gitea-registry
  namespace: gitea
spec:
  parentRefs:
    - name: traefik-gateway
      namespace: kube-system
      sectionName: websecure-registry
  hostnames:
    - registry.home.stuff
  rules:
    - matches:
        - path:
            type: PathPrefix
            value: /
      backendRefs:
        - name: gitea-http
          port: 3000

I applied it using kubectl:

kubectl apply -f gitea-registry-httproute.yaml -n gitea

Create runner in Gitea

I had one more prerequisite before creating my Gitea runner: a token the runner uses to authenticate with Gitea. Gitea runners can be registered at the instance, organization, or repository level. Since I only have one organization on my Gitea instance, an instance-level runner and an organization-level runner would cover the same set of repositories, so the distinction doesn't matter for me. I'm fine with all of my repositories sharing the same runner (for now), so I created the token for my runner using the following steps.

  1. Log in as Site Admin to Gitea
  2. Go to Site Administration > Actions > Runners > Create Runner
  3. Copy the created token

To give my runner access to the token, I stored it as a secret in my K3S cluster.

kubectl create secret generic gitea-runner-secret --from-literal=token=<my-runner-token> -n <namespace>

Set up the Gitea runner

With the Docker Engine dependency resolved, I set up a runner to execute Gitea Actions jobs, defined in gitea-runner.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gitea-runner
  namespace: gitea
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitea-runner
  template:
    metadata:
      labels:
        app: gitea-runner
    spec:
      containers:
        - name: runner
          image: gitea/act_runner:latest
          env:
            - name: GITEA_INSTANCE_URL
              value: http://gitea-http:3000
            - name: GITEA_RUNNER_REGISTRATION_TOKEN
              valueFrom:
                secretKeyRef:
                  name: gitea-runner-secret
                  key: token
          volumeMounts:
            - name: docker-socket
              mountPath: /var/run/docker.sock
            - name: runner-data
              mountPath: /data
      volumes:
        - name: docker-socket
          hostPath:
            path: /var/run/docker.sock
            type: Socket
        - name: runner-data
          emptyDir: {}

I applied the change using kubectl.

kubectl apply -f gitea-runner.yaml
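One optional tweak I may make later: the gitea/act_runner image also reads a runner name and job labels from environment variables. This is a hedged sketch — the variable names come from the act_runner documentation, but the values are my own choices — of entries that would slot into the container's env: list above.

```yaml
# Optional extras for the runner container's env: list.
# GITEA_RUNNER_NAME / GITEA_RUNNER_LABELS are documented act_runner
# variables; this label maps "ubuntu-latest" jobs to a specific image.
- name: GITEA_RUNNER_NAME
  value: k3s-runner
- name: GITEA_RUNNER_LABELS
  value: "ubuntu-latest:docker://gitea/runner-images:ubuntu-latest"
```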

Conclusion: Finally ready for that pipeline

Getting Gitea from its vanilla install into a state where it is ready to publish my projects to my K3S cluster using Gitea Actions took a fair bit more work than I was expecting, but I’m happy with where I ended up. After all of that, here is where the project stands:

  • Gitea Actions is active on my Gitea instance
  • A Gitea Runner is ready to spin up and run Docker containers in the pipelines
  • Gitea has its native container registry up and running and available to both K3S’s containerd container runtime and the Gitea Runner’s Docker Engine
  • I now have full 3-part domain names available for my K3S cluster’s services
  • I can now enable SSL for any of the services on my K3S cluster

The next step is to finally publish to my K3S cluster via Gitea!
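In the meantime, the whole chain can be smoke-tested with a minimal workflow: Gitea Actions picks up workflow files from .gitea/workflows/ in a repository, and the default runner labels include ubuntu-latest. This is just a sketch, not yet the deployment pipeline:

```yaml
# .gitea/workflows/smoke-test.yaml
name: smoke-test
on: [push]

jobs:
  hello:
    runs-on: ubuntu-latest
    steps:
      - name: Check out the repository
        uses: actions/checkout@v4
      - name: Prove the runner can run jobs
        run: echo "Gitea Actions is up and running"
```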