Thanks a lot for sharing @mariusrugan.
The sample is not complete since you need to bring your own secret and volume, but it is fair to assume that people who have already tried the setup have those. Your example is very helpful; I wish I had found it in the docs instead of the example that does not work…
I think you should add the namespace to the StatefulSet template.
```yaml
...
    reloader.stakater.com/auto: "true"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitea-act-runner-dind
  serviceName: gitea-act-runner-dind
  template:
    metadata:
      namespace: gitea-runners # <<<<< HERE (my namespace differs)
      labels:
        app: gitea-act-runner-dind
    spec:
...
```
Another suggestion I would make is to fix your data volumes so that running more than one replica also works.
For that, remove:

```yaml
# - name: runner-data
#   persistentVolumeClaim:
#     claimName: act-runner-vol
```
Add something like:

```yaml
volumeClaimTemplates:
  - apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: gitea-runner-storage
      namespace: gitea-runners
    spec:
      storageClassName: nfs-provisioner-ssd
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: "1Gi"
```
and ensure the `volumeMounts` include:

```yaml
volumeMounts:
  - name: docker-certs
    mountPath: /certs
  - name: gitea-runner-storage # <<< THAT
    mountPath: /data           # <<< THAT
...
```
This will create a PersistentVolumeClaim per replica and bind it to the pod started by the StatefulSet. With that, setting `replicas: 2`, for instance, will give you 2 runners.
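To see the per-replica claims, you can list the PVCs in the namespace. Kubernetes names them `<template-name>-<pod-name>`; assuming the StatefulSet itself is named `gitea-act-runner-dind` (matching the `serviceName` above — adjust if yours differs), the output looks roughly like:

```
$ kubectl get pvc -n gitea-runners
NAME                                            STATUS   ...
gitea-runner-storage-gitea-act-runner-dind-0    Bound    ...
gitea-runner-storage-gitea-act-runner-dind-1    Bound    ...
```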
Finally, I would suggest a better name for the runners. Using the node name is not great: you may have several runners running on a node, and they will all show up with the same name. Instead, you can use:
```yaml
- name: GITEA_RUNNER_NAME
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
```
which gives a better way to uniquely identify those runners (I can't add another image, so here is the text version…):
| Status | ID | Name | Version | Type | Labels | Last Online Time | Edit |
|--------|----|------|---------|------|--------|------------------|------|
| Idle | 13 | 10.244.2.250 | ... | | | | |
| Idle | 14 | 10.244.1.243 | ... | | | | |
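As a small variation (not part of the original setup), the same Downward API `fieldRef` mechanism can expose the pod name instead of the IP; pod names from a StatefulSet are stable ordinals, which may read better in the runner list:

```yaml
# Alternative: use the stable pod name (e.g. gitea-act-runner-dind-0)
# instead of the pod IP, which can change across restarts.
- name: GITEA_RUNNER_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
```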