Set up Gitea Actions Runner
Background
Some years ago I set up a homelab server where I host applications like Plex and Jellyfin, but lately I wanted to set up a git server with CI/CD capabilities to self-host my personal projects.
After some thought I landed on Gitea, a git server with the functionality I want. Since I wanted CI/CD, I also needed a place to store container images, and Gitea's built-in container registry made the choice easier.
I could have chosen another git server and set up a separate container registry like Project Quay, but for now I wanted an all-in-one solution.
Gitea
The Gitea setup was quite straightforward, and the official documentation makes it relatively easy. With some configuration I managed to integrate my Keycloak server as an OIDC provider for login, so now I have SSO for some of my applications. I haven't set up SSH access, but I might in the future.
After Gitea was set up, I wanted to add a build agent to automate builds and push container images to my container registry.
Gitea build agent
Adding the build agent was straightforward: add a container with some configuration and a registration token, and you're ready to go.
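For reference, registration can also be done from the command line. A sketch of the command, where the instance URL, token, runner name, and label are all placeholders you would replace with your own values:

```shell
# Register the runner against a Gitea instance (all values are placeholders).
act_runner register \
  --instance https://gitea.example.com \
  --token <REGISTRATION_TOKEN> \
  --name my-runner \
  --labels "ubuntu-latest:docker://docker.io/node:20-bookworm" \
  --no-interactive
```

The same values can instead be supplied via environment variables, as the Docker Compose example further down shows.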
The Gitea Actions Runner is based on act, a project for running and testing GitHub Actions locally, so existing GitHub Actions should work out of the box with the Gitea act_runner.
There are some minor snags to keep in mind, though.
1. Labels determine how the agent functions.
If a label does not refer to a container image, the actions run inside the action runner container itself. This may be preferable in some cases, but the official GitHub actions require Node.js to be installed; without it, you won't even be able to clone the repository.
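The label syntax can be sketched like this, in the runner section of the act_runner config file (the image names here are illustrative choices, not recommendations):

```yaml
runner:
  labels:
    # Jobs requesting "ubuntu-latest" run in a Node.js container image.
    - "ubuntu-latest:docker://docker.io/node:20-bookworm"
    # Jobs requesting "native" run directly inside the runner container,
    # which is where the Node.js requirement below comes into play.
    - "native:host"
```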
Unfortunately, the Gitea act_runner image does not come with Node.js pre-installed, so you should build your own act_runner image with Node.js included if you don't want to spin up an extra container for every build.
More information about this is found in the official Gitea Act Runner documentation.
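A minimal sketch of such a custom image. I'm assuming here that the official act_runner image is Alpine-based; verify the actual base image and swap the package manager if it is not:

```dockerfile
# Hypothetical custom runner image with Node.js added, so jobs with a
# plain (non-image) label can still run the official checkout action.
FROM docker.io/gitea/act_runner:latest
RUN apk add --no-cache nodejs
```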
2. The Docker Socket
The Docker socket is how the runner talks to the container engine. This can be Docker, Podman, or any other engine that exposes a Docker-compatible API.
My setup uses rootless Podman to keep the containers isolated from the rest of the machine, which should make it difficult to do much harm if anyone manages to escape a container.
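Worth noting: rootless Podman only provides a Docker-compatible API socket when its user socket unit is enabled. A sketch of the commands, assuming a systemd user session:

```shell
# Enable the rootless Podman API socket for the current user.
systemctl --user enable --now podman.socket

# The socket then lives under the user's runtime directory.
ls -l /run/user/$(id -u)/podman/podman.sock
```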
When configuring the Gitea build agent (the actions runner), it is recommended to mount the Docker socket into the container, and once I had figured out the labels, the agent seemed to be running as it should. It is worth mentioning that the build agent started out with the default configuration.
When I tried a build, I noticed the act_runner container downloading the build image; then the build suddenly failed with the following message:
failed to create container: ‘Error response from daemon: container create: statfs /var/run/docker.sock: permission denied’
The message left me somewhat dumbfounded as I didn’t understand why I got this error when the image had downloaded correctly.
After some trial and error with the act_runner configuration file, I found the reason: the act_runner container tries to mount /var/run/docker.sock into the "ubuntu-latest" build container. Because of how my system is set up, this file does not exist on my host.
Because I use rootless Podman, the correct socket location is /var/run/user/1000/podman/podman.sock, which is the socket I mounted in the Gitea act_runner container.
After adding a configuration file and running the build again, it became clear that act_runner tries to mount the Docker socket from /var/run/docker.sock, which exists only inside the act_runner container, not on my host machine.
Merged container.HostConfig ==> &{Binds:[/var/run/docker.sock:/var/run/docker.sock] ContainerIDFile:
LogConfig:{Type: Config:map[]}
NetworkMode:GITEA-ACTIONS-TASK-54_WORKFLOW-Build-and-Push-Container-Image_JOB-build-build-network PortBindings:map[]
RestartPolicy:{Name:no MaximumRetryCount:0} AutoRemove:true VolumeDriver: VolumesFrom:[] ConsoleSize:[0 0]
Annotations:map[] CapAdd:[] CapDrop:[] CgroupnsMode: DNS:[] DNSOptions:[] DNSSearch:[] ExtraHosts:[] GroupAdd:[]
IpcMode: Cgroup: Links:[] OomScoreAdj:0 PidMode: Privileged:true PublishAllPorts:false ReadonlyRootfs:false
SecurityOpt:[label=disable] StorageOpt:map[] Tmpfs:map[] UTSMode: UsernsMode: ShmSize:0 Sysctls:map[] Runtime:
Isolation: Resources:{CPUShares:0 Memory:0 NanoCPUs:0 CgroupParent: BlkioWeight:0 BlkioWeightDevice:[]
BlkioDeviceReadBps:[] BlkioDeviceWriteBps:[] BlkioDeviceReadIOps:[] BlkioDeviceWriteIOps:[] CPUPeriod:0 CPUQuota:0
CPURealtimePeriod:0 CPURealtimeRuntime:0 CpusetCpus: CpusetMems: Devices:[] DeviceCgroupRules:[] DeviceRequests:[]
KernelMemory:0 KernelMemoryTCP:0 MemoryReservation:0 MemorySwap:0 MemorySwappiness:0xc0002bf9c8
OomKillDisable:0xc0002bf8c3 PidsLimit:0xc0002bfa28 Ulimits:[] CPUCount:0 CPUPercent:0 IOMaximumIOps:0
IOMaximumBandwidth:0} Mounts:[{Type:volume
Source:GITEA-ACTIONS-TASK-54_WORKFLOW-Build-and-Push-Container-Image_JOB-build-env Target:/var/run/act ReadOnly:false
Consistency: BindOptions:<nil> VolumeOptions:<nil> TmpfsOptions:<nil> ClusterOptions:<nil>}
{Type:volume Source:GITEA-ACTIONS-TASK-54_WORKFLOW-Build-and-Push-Container-Image_JOB-build Target:/workspace/***/blog
ReadOnly:false Consistency: BindOptions:<nil> VolumeOptions:<nil> TmpfsOptions:<nil> ClusterOptions:<nil>}
{Type:volume Source:act-toolcache Target:/opt/hostedtoolcache ReadOnly:false Consistency: BindOptions:<nil>
VolumeOptions:<nil> TmpfsOptions:<nil> ClusterOptions:<nil>}] MaskedPaths:[] ReadonlyPaths:[] Init:<nil>}
The initial workaround
The initial workaround was to create a symlink on the host from /var/run/docker.sock to /var/run/user/1000/podman/podman.sock.
However, this didn’t survive host reboots.
The solution
In the end I realized that I could mount /var/run/user/1000/podman/podman.sock at the same path inside the act_runner container. This makes the container match the host, and the volume mount works as expected.
To make sure the correct socket is used, I updated the container.docker_host variable in the act_runner configuration to point to /var/run/user/1000/podman/podman.sock. When using a non-standard Docker host, the DOCKER_HOST environment variable must also point to the new socket location.
The Docker Compose example below shows how to set up a Gitea Action Runner with a non-standard docker host.
version: '3.8'
services:
  gitea-runner:
    image: docker.io/gitea/act_runner:latest
    container_name: "gitea-act-runner"
    environment:
      - CONFIG_FILE=/config.yaml
      - GITEA_INSTANCE_URL=${GITEA_INSTANCE_URL}
      - GITEA_RUNNER_REGISTRATION_TOKEN=${GITEA_RUNNER_TOKEN}
      - GITEA_RUNNER_NAME=${GITEA_RUNNER_NAME}
      - GITEA_RUNNER_LABELS=${GITEA_RUNNER_LABELS}
      - DOCKER_HOST=unix:///var/run/user/1000/podman/podman.sock
    security_opt:
      - label=disable
    volumes:
      - /var/run/user/1000/podman/podman.sock:/var/run/user/1000/podman/podman.sock:ro,z
      - gitea-runner-data:/data:z
      - type: bind
        source: ${GITEA_CONFIG_FILE}
        target: /config.yaml
        read_only: true

volumes:
  gitea-runner-data:

networks:
  traefik_network:
    external: true
In addition to the Docker Compose file, you must make sure to update the configuration appropriately.
container:
  # Whether to use privileged mode or not when launching task containers (privileged mode is required for Docker-in-Docker).
  privileged: true
  # And other options to be used when the container is started (eg, --add-host=my.gitea.url:host-gateway).
  options: "--security-opt label=disable"
  # Overrides the docker client host with the specified one.
  # If it's empty, act_runner will find an available docker host automatically.
  # If it's "-", act_runner will find an available docker host automatically, but the docker host won't be mounted to the job containers and service containers.
  # If it's not empty or "-", the specified docker host will be used. An error will be returned if it doesn't work.
  docker_host: "unix:///run/user/1000/podman/podman.sock"
One caveat with this solution is that all job containers run on the host machine. Unless you run the act_runner on a different machine, you share your container host with the rest of your services. This is fine if you have total control, but if you share your Gitea instance with others, you should consider whether this is something you want.
One possible solution is to use custom build images with Podman installed. That way you can still run containers during a build if you are, for example, using something like Testcontainers. Of course, if the container host socket is not shared, you cannot deploy containers directly from Gitea to your host, but I don't think that's a route anybody would take anyway.
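As a sketch of that idea, a build image with Podman baked in. The base image and package list are assumptions for illustration, not a tested recipe:

```dockerfile
# Hypothetical build image for jobs that need to start containers
# themselves (e.g. Testcontainers) without touching the host socket.
# Node.js is included so the official checkout action still works.
FROM docker.io/library/ubuntu:24.04
RUN apt-get update \
 && apt-get install -y --no-install-recommends podman nodejs \
 && rm -rf /var/lib/apt/lists/*
```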