504 - Gateway Timeout on Repository Creation

We are currently load testing our Gitea installation, and after creating a few thousand repositories (usually around 3,000) we see an increasing number of 504 Gateway Timeout errors when creating repositories via the API.
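
For context, the load test drives the standard repository creation endpoint (POST /api/v1/user/repos). Below is a simplified sketch of the request pattern, not our actual harness; the base URL, token, and repository names are placeholders:

  // Simplified sketch of the load test's request pattern
  // (placeholder base URL, token, and repository names; not the actual harness).
  package main

  import (
      "bytes"
      "fmt"
      "net/http"
      "time"
  )

  func createRepo(client *http.Client, baseURL, token, name string) (int, error) {
      payload := []byte(fmt.Sprintf(`{"name": %q, "private": true, "auto_init": false}`, name))
      req, err := http.NewRequest(http.MethodPost, baseURL+"/api/v1/user/repos", bytes.NewReader(payload))
      if err != nil {
          return 0, err
      }
      req.Header.Set("Content-Type", "application/json")
      req.Header.Set("Authorization", "token "+token)
      resp, err := client.Do(req)
      if err != nil {
          return 0, err
      }
      defer resp.Body.Close()
      return resp.StatusCode, nil
  }

  func main() {
      client := &http.Client{Timeout: 60 * time.Second}
      for i := 0; i < 3000; i++ {
          status, err := createRepo(client, "https://gitea.example.com", "PLACEHOLDER_TOKEN",
              fmt.Sprintf("loadtest-repo-%04d", i))
          if err != nil || status != http.StatusCreated {
              // The 504s show up here as unexpected status codes once the waves start.
              fmt.Printf("repo %d: status=%d err=%v\n", i, status, err)
          }
      }
  }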

It seems that these errors come in waves:

  • the errors start and persist for a while (meaning all repository creations fail during that time); I would describe it as Gitea “falling asleep”
  • at some point the system seems to “recover” and repository creation catches up (I can see a short spike of high load on the database)
  • memory consumption stays quite high even when the test is stopped or paused
  • once the system has “recovered”, everything runs smoothly again for a long time

We are using the following setup:

  • Gitea 1.19.1 Release
  • Running in a Kubernetes cluster, scaling from 2 to 10 pods
  • Azure Redis Cache, Azure SQL DB (DB_TYPE=mssql), repo indexer disabled (see the app.ini sketch below)
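
For completeness, here is a sketch of the relevant app.ini sections. The values are placeholders and the excerpt is reconstructed from the points above rather than copied from our configuration; it assumes Redis is used for cache, sessions, and queues:

  [database]
  DB_TYPE = mssql
  HOST    = example.database.windows.net:1433
  NAME    = gitea
  USER    = gitea
  PASSWD  = <placeholder>

  [cache]
  ADAPTER = redis
  HOST    = redis://:<placeholder>@example.redis.cache.windows.net:6380/0

  [session]
  PROVIDER        = redis
  PROVIDER_CONFIG = redis://:<placeholder>@example.redis.cache.windows.net:6380/0

  [queue]
  TYPE     = redis
  CONN_STR = redis://:<placeholder>@example.redis.cache.windows.net:6380/0

  [indexer]
  REPO_INDEXER_ENABLED = false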

I obtained the following logs (these repeat, so I only include one occurrence):

2023/06/12 08:53:15 .../api/v1/repo/repo.go:270:CreateUserRepo() [E] GetRepositoryByID: context canceled
2023/06/12 08:53:15 ...common/middleware.go:72:1() [E] PANIC: runtime error: invalid memory address or nil pointer dereference
 /usr/local/go/src/runtime/panic.go:260 (0x458a3c)
 /usr/local/go/src/runtime/signal_unix.go:837 (0x458a0c)
 /go/src/code.gitea.io/gitea/models/repo/repo.go:544 (0x128be11)
 /go/src/code.gitea.io/gitea/models/repo/repo.go:574 (0x1d0b6da)
 /go/src/code.gitea.io/gitea/services/convert/repository.go:26 (0x1d0b6d1)
 /go/src/code.gitea.io/gitea/services/convert/repository.go:20 (0x228063c)
 /go/src/code.gitea.io/gitea/routers/api/v1/repo/repo.go:273 (0x228063d)
 /go/src/code.gitea.io/gitea/routers/api/v1/repo/repo.go:508 (0x2281204)
 /go/src/code.gitea.io/gitea/modules/web/wrap_convert.go:62 (0x2111673)
 /go/src/code.gitea.io/gitea/modules/web/wrap.go:40 (0x210fd64)
 /usr/local/go/src/net/http/server.go:2122 (0x9b982e)
 /go/pkg/mod/github.com/go-chi/chi/v5@v5.0.8/mux.go:444 (0x182bd15)
 /usr/local/go/src/net/http/server.go:2122 (0x9b982e)
 /go/src/code.gitea.io/gitea/modules/web/wrap.go:97 (0x21107ec)
 /usr/local/go/src/net/http/server.go:2122 (0x9b982e)
 /go/src/code.gitea.io/gitea/modules/web/wrap.go:97 (0x21107ec)
 /usr/local/go/src/net/http/server.go:2122 (0x9b982e)
 /go/src/code.gitea.io/gitea/modules/context/api.go:252 (0x1c8fd71)
 /usr/local/go/src/net/http/server.go:2122 (0x9b982e)
 /go/src/code.gitea.io/gitea/routers/api/v1/api.go:1288 (0x22b63d3)
 /usr/local/go/src/net/http/server.go:2122 (0x9b982e)
 /go/pkg/mod/github.com/go-chi/chi/v5@v5.0.8/mux.go:73 (0x1829a94)
 /go/pkg/mod/github.com/go-chi/chi/v5@v5.0.8/mux.go:316 (0x182b4a3)
 /usr/local/go/src/net/http/server.go:2122 (0x9b982e)
 /go/pkg/mod/github.com/go-chi/chi/v5@v5.0.8/mux.go:444 (0x182bd15)
 /usr/local/go/src/net/http/server.go:2122 (0x9b982e)
 /go/src/code.gitea.io/gitea/routers/common/middleware.go:80 (0x2242f22)
 /usr/local/go/src/net/http/server.go:2122 (0x9b982e)
 /go/src/code.gitea.io/gitea/modules/web/routing/logger_manager.go:122 (0x210be73)
 /usr/local/go/src/net/http/server.go:2122 (0x9b982e)
 /go/src/code.gitea.io/gitea/routers/common/middleware.go:112 (0x2241b15)
 /usr/local/go/src/net/http/server.go:2122 (0x9b982e)
 /go/pkg/mod/github.com/chi-middleware/proxy@v1.1.1/middleware.go:37 (0x2240836)
 /usr/local/go/src/net/http/server.go:2122 (0x9b982e)
 /go/src/code.gitea.io/gitea/routers/common/middleware.go:32 (0x2242d74)
 /usr/local/go/src/net/http/server.go:2122 (0x9b982e)
 /go/pkg/mod/github.com/go-chi/chi/v5@v5.0.8/mux.go:90 (0x1829a4f)
 /go/src/code.gitea.io/gitea/modules/web/route.go:191 (0x210f24d)
 /usr/local/go/src/net/http/server.go:2936 (0x9bce35)
 /usr/local/go/src/net/http/server.go:1995 (0x9b8351)
 /usr/local/go/src/runtime/asm_amd64.s:1598 (0x478a80)
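
The panic itself looks like a follow-on effect of the canceled request context rather than the cause of the slowdown: if I read the trace correctly, GetRepositoryByID fails with “context canceled” (presumably because the gateway has already answered 504 and the client gave up), and the handler then still dereferences the nil repository while converting it for the API response. Below is an illustrative sketch of that pattern; the function names mirror the stack trace, but the bodies are simplified stand-ins and not Gitea's actual code:

  // Illustrative stand-ins for the functions in the stack trace
  // (simplified; not Gitea's actual implementation).
  package main

  import (
      "context"
      "fmt"
  )

  type Repository struct {
      Name string
  }

  // Once the request context is canceled (e.g. the gateway already returned 504
  // and the client hung up), the lookup returns (nil, context.Canceled).
  func getRepositoryByID(ctx context.Context, id int64) (*Repository, error) {
      if err := ctx.Err(); err != nil {
          return nil, err
      }
      return &Repository{Name: fmt.Sprintf("repo-%d", id)}, nil
  }

  // The handler logs the lookup error but keeps going, so the nil repository
  // reaches the conversion step and triggers the nil pointer dereference.
  func createUserRepo(ctx context.Context, id int64) {
      repo, err := getRepositoryByID(ctx, id)
      if err != nil {
          fmt.Println("[E] GetRepositoryByID:", err) // first log line
      }
      // the missing early return here is the illustrative bug
      fmt.Println("converted:", repo.Name) // panics when repo is nil
  }

  func main() {
      defer func() {
          if r := recover(); r != nil {
              fmt.Println("[E] PANIC:", r) // second log line
          }
      }()
      ctx, cancel := context.WithCancel(context.Background())
      cancel() // simulate the request context being canceled mid-request
      createUserRepo(ctx, 42)
  }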

I’m not able to identify the root cause of this issue, as I can’t see any bottleneck: the database runs fine, the load is not too high, Redis is fine, and the pods should have enough resources.

Any ideas or similar experiences are much appreciated!
I’m happy to provide as many details as I can.

Thank you for your support!
Johannes