I have a production Gitea instance running v1.15.9 with PostgreSQL 15 in Docker on Ubuntu 20.04.x. After attempting the upgrade to v1.22.1 twice (restoring from backup between attempts), it has been experiencing a “strange behavior” which I will do my best to explain here, after I first explain how I got to this point.
Testing the Gitea Backup on a Test Server
Since I know there is no downgrade path for Gitea major versions, and because I know not to test upgrades in production, I created a Gitea backup per the documentation & then rsynced that backup, along with my docker-compose.yml, to a test server also running Ubuntu 20.04.x.
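For reference, the backup + copy amounted to roughly this. The container name, mount layout, and destination are examples from my setup, where ./gitea is bind-mounted to /data:

```bash
# Create the dump inside the running container so it lands in the
# bind-mounted data directory and shows up on the host.
docker exec -u git -it -w /data gitea bash -c '/usr/local/bin/gitea dump -c /data/gitea/conf/app.ini'
# Ship the dump and the compose file to the test server.
rsync -av ./gitea/gitea-dump-*.zip docker-compose.yml user@test-server:~/gitea/
```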
I then followed the Gitea Docker restore process per the documentation & was able to re-create my production Gitea environment on the test server. Next, I upgraded this test instance from v1.15.9 to v1.22.1 and it worked as expected.
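The restore on the test server was roughly the following. The layout inside the dump zip varies a bit between Gitea versions, so treat the exact paths as illustrative (same assumed ./gitea:/data bind mount, with the DB container named gitea-db):

```bash
# Unpack the dump next to the compose file.
unzip gitea-dump-*.zip -d dump
# Put repositories, data, and config back under the bind mount.
mv dump/repos/* ./gitea/git/repositories/
mv dump/data/* ./gitea/gitea/
# Load the SQL dump into the postgres container.
docker exec -i gitea-db psql -U gitea -d gitea < dump/gitea-db.sql
```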
These are the steps I documented to upgrade my Gitea instance running in Docker (a sketch of the relevant docker-compose.yml section follows the list):
- Stop the container with `docker-compose down`.
- Change the Gitea version in the docker-compose.yml. The field is `image: gitea/gitea:1.22.1`; set it to the version you want to upgrade to.
- In the same directory as the docker-compose.yml, run `docker-compose pull`.
- In the same directory, start a new container (which automatically replaces the old one) with `docker-compose up -d`.
- Navigating to Gitea right after this gives a “502 Bad Gateway”; just be patient while it finishes coming back up.
- Gitea should now be running the version you upgraded to.
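For context, the relevant part of my docker-compose.yml looks roughly like this; volume paths, ports, and DB credentials are placeholders rather than my real values:

```yaml
services:
  server:
    image: gitea/gitea:1.22.1   # this is the line that changes on upgrade
    environment:
      - GITEA__database__DB_TYPE=postgres
      - GITEA__database__HOST=db:5432
      - GITEA__database__NAME=gitea
      - GITEA__database__USER=gitea
      - GITEA__database__PASSWD=gitea
    volumes:
      - ./gitea:/data
    ports:
      - "3000:3000"
      - "222:22"
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      - POSTGRES_USER=gitea
      - POSTGRES_PASSWORD=gitea
      - POSTGRES_DB=gitea
    volumes:
      - ./postgres:/var/lib/postgresql/data
```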
Upgrading The Production Server
After the test server was upgraded successfully, it was time to run the same process on the production Gitea server.
First Upgrade Attempt on Production
- The first time I attempted the upgrade from Gitea v1.15.9 → v1.22.1, everything appeared to be fine, except that repos whose code had not been touched in years started showing as updated “5 minutes ago”. I then noticed that the server was working through all the repos but only doing this to some of them, not all. I figured it would all settle down, since my test server did not have this issue, so I left it alone overnight to see what I would find this morning.
- This morning I found that the repos had stopped exhibiting this “strange behavior”, but certain repos that had not been updated in a long time now showed updated times within the last 24 hours. This is not what I experienced on the test server, which ran the same repos & configuration.
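In case it helps anyone reproduce this: the timestamps the UI shows should correspond to the repository table’s updated_unix column (that column name is my assumption from poking around the schema, and the container name, DB user, and DB name below are examples from my setup):

```bash
docker exec -it gitea-db psql -U gitea -d gitea -c \
  "SELECT name, to_timestamp(updated_unix) AS updated
     FROM repository
    ORDER BY updated_unix DESC
    LIMIT 20;"
```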
Second Upgrade Attempt on Production
- I figured maybe it was just a glitch, so this time I started with a fresh ~/gitea directory and restored from my recent backup. After I had the server back to what it was running before the first upgrade attempt, I logged in to see whether any of this “strange behavior” was happening before any upgrade, but it was running as expected on v1.15.9 with none of the issues described above.
- I then went through the whole upgrade process, per the steps I documented above, and after spending all day on it I have the same “strange behavior”.
Does anyone have any ideas on what is happening or how to fix this?
- For the time being, I am restoring production back to Gitea v1.15.9 until I can find out what is causing this issue and how to fix it. I tried searching the forum and could not find anyone else with this type of issue. If I had the same issue on both my test and production servers, I would assume it is some form of bug, but it only occurred on one of the two.
- One idea I had was to upgrade the server to v1.19.4 first to see if the same issue appears, and then upgrade to v1.22.1 after that (a sketch of what I mean follows), but it is a lot of work backing up & restoring, which is why I am posting here to see what ideas, thoughts, and suggestions the community has.
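Roughly what I have in mind for that stepwise attempt; the sed edits are just shorthand for changing the image: line by hand:

```bash
# Step to an intermediate release first, verify, then continue.
sed -i 's|gitea/gitea:1.15.9|gitea/gitea:1.19.4|' docker-compose.yml
docker-compose pull && docker-compose up -d
# ...check whether the repo timestamps go wrong here, then:
sed -i 's|gitea/gitea:1.19.4|gitea/gitea:1.22.1|' docker-compose.yml
docker-compose pull && docker-compose up -d
```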
Help please