Pardon me if this is a known issue, but I couldn't find any good documentation on what I'm seeing here.
I'm trying to build Docker images inside an act runner that I have running as a StatefulSet in a Kubernetes cluster.
Various things I've tried have all effectively resulted in the same failure to talk to Docker:
Cannot connect to the Docker daemon at tcp://localhost:2375. Is the docker daemon running?
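To take docker compose out of the equation, a bare docker info step is enough to exercise the connection; a minimal sketch (the step name is just illustrative):

steps:
  - name: Check Docker connectivity
    # succeeds only if the daemon at DOCKER_HOST is reachable; otherwise it
    # fails with the same "Cannot connect to the Docker daemon" error
    run: docker info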
My workflow at present is as follows:
name: Deploy to production

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: catthehacker/ubuntu:act-latest
      env:
        DOCKER_HOST: tcp://localhost:2375
      ports:
        - 2375
      volumes:
        - /var/run/docker.sock:/var/run/docker.sock
        - /certs
    steps:
      - name: Checkout Code
        uses: actions/checkout@v4
      - name: Run docker compose build
        run: docker compose -f docker/webserver/docker-compose.yml build
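For debugging, a step along these lines shows what the job container actually inherits (a sketch; the paths are the ones mounted above):

      - name: Dump Docker environment
        # lists the DOCKER_* variables the job sees and checks whether the
        # mounted socket and cert directory actually exist in the container
        run: |
          env | grep -i docker || true
          ls -l /var/run/docker.sock /certs || true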
And the act runner pod I'm using looks like this:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: act-runner
    apps.kubernetes.io/pod-index: "0"
    statefulset.kubernetes.io/pod-name: act-runner-0
  name: act-runner-0
  namespace: gitea
spec:
  containers:
    - env:
        - name: DOCKER_HOST
          value: tcp://localhost:2376
        - name: DOCKER_CERT_PATH
          value: /certs/client
        - name: DOCKER_TLS_VERIFY
          value: "1"
        - name: GITEA__log__LEVEL
          value: debug
        - name: GITEA_INSTANCE_URL
          value: https://example.com
        - name: GITEA_RUNNER_REGISTRATION_TOKEN
          value: supersecrettoken
        - name: GITEA_RUNNER_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
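For completeness: the DOCKER_HOST, DOCKER_CERT_PATH, and DOCKER_TLS_VERIFY values above presuppose a docker-in-docker sidecar in the same pod serving TLS on 2376, as in the act_runner Kubernetes example. A trimmed sketch of that sidecar (the image tag and volume name are illustrative):

    - name: daemon
      image: docker:dind
      env:
        - name: DOCKER_TLS_CERTDIR
          # dind generates client certs under $DOCKER_TLS_CERTDIR/client,
          # which is what DOCKER_CERT_PATH above points at
          value: /certs
      securityContext:
        privileged: true  # dind needs a privileged container to run
      volumeMounts:
        - name: docker-certs
          mountPath: /certs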
If there's anything obvious I'm missing, please let me know; it would be nice to get this figured out.