Jeff Dagenais

Hi Jeremy, all...

We have just migrated our GitLab instance into a TurnKey GitLab LXC instance. I am happy to see the switch to the Omnibus install, so kudos for that.

I have mixed feelings about the TKLBAM design which, unless there is something I'm missing, with the stock config makes a single huge tar via the gitlab-backup command (configured in /etc/tklbam/hooks.d). By default, this takes ALL repositories, artifacts, etc.

Then TKLBAM proceeds to upload this huge file to S3. On the next backup it does it all again, and the incremental backup is as large as a full one, since it includes the whole big tar file again.

Since I am not a fan of wasting space (and I also know this doesn't scale well), I've decided to tell gitlab-backup to skip archiving the big stuff (repos, artifacts, Docker registry) and to back those directories up as regular files via /etc/tklbam/overrides.
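For illustration, roughly what that looks like (a sketch only; the exact SKIP components and directory paths assume a default Omnibus layout and may differ on your setup):

# In the TKLBAM hook, tell gitlab-backup not to archive the heavy components:
gitlab-backup create SKIP=repositories,artifacts,registry

# Then list the skipped directories in /etc/tklbam/overrides so TKLBAM
# backs them up as regular files, e.g.:
/var/opt/gitlab/git-data/repositories
/var/opt/gitlab/gitlab-rails/shared/artifacts
/var/opt/gitlab/gitlab-rails/shared/registry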

This indeed makes the incrementals smaller... but not small enough. When running two TKLBAM backups back to back, the second one should produce a very tiny "inc" backup on the Hub.

My observations show it is mainly our database that grows the gitlab-backup tar file. I understand the Omnibus setup bundles and configures its own PostgreSQL outside of the regular apt-installed packages. I guess this is what prevents TKLBAM from detecting the database and backing it up "the standard way".

So how can I go about telling TKLBAM about this DB, if that is at all possible?

Thanks for any help!

Jeremy Davis

Glad to hear that you're happy about the move to the Omnibus package. It certainly makes it easier to maintain for all of us! :)

Regarding the backups. That's very interesting. FWIW, I spent a ton of time working out the backup hook scripts to get it all working reliably. I was under the impression that TKLBAM should still do a diff of the file and only sync the bits that had changed. During my testing, the incremental backups were larger than ideal, but were still smaller than full backups. But I was testing with a fairly minimal data set and re-running backups soon after, which may explain why I didn't notice this.

Perhaps it might be better to extract the tarball that the GitLab backup produces and back up the actual files instead of a bundled archive? Then an incremental backup should only include the changes. The downside of that is that we'd end up with 2x the data sitting on your server. We could clean up the data after upload I guess, but that may cause issues that users find hard to work out (i.e. running out of free space during a backup, even though the disk appears to have plenty of free space when they check).
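Something like this is what I have in mind (just a sketch; the backup location assumes the default Omnibus path and the cleanup step would need more thought):

BACKUP_DIR=/var/opt/gitlab/backups
EXTRACT_DIR=$BACKUP_DIR/extracted
mkdir -p "$EXTRACT_DIR"
# Unpack the most recent GitLab backup so TKLBAM can diff individual files
LATEST=$(ls -t "$BACKUP_DIR"/*_gitlab_backup.tar | head -n1)
tar -xf "$LATEST" -C "$EXTRACT_DIR"
# Optionally remove the bundled archive afterwards to reclaim its space
rm -f "$LATEST"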

Pondering this some more, the GitLab backup page notes a couple of things that are probably well worth a try. Namely BACKUP=dump and GZIP_RSYNCABLE=yes. According to the docs, that makes incremental backups work better, so it might be the way to improve things? (TKLBAM doesn't use rsync, but I imagine that if the archive is more friendly for rsync, then it'll also most likely be more friendly for TKLBAM incremental backups.) If you have some spare time and/or energy, it'd be great if you could check it out and let me know whether it appears to help.
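If my reading of the docs is right, it should just be a matter of tweaking the gitlab-backup call in the hook to something like:

gitlab-backup create BACKUP=dump GZIP_RSYNCABLE=yes

BACKUP=dump keeps the archive filename constant (dump_gitlab_backup.tar) rather than timestamped, and GZIP_RSYNCABLE=yes passes gzip's --rsyncable flag, so unchanged sections of the archive should stay byte-identical between runs, which is exactly what an incremental diff needs.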

My only potential concern regarding that path is that we'd still need a way to work out the GitLab version (IIRC the restore currently gets that from the original backup filename). I'm sure there'd be a relatively easy workaround for that, but I wanted to mention it.

Regarding your explicit question about Postgres: yes, unfortunately TKLBAM doesn't recognise the third-party Postgres install that GitLab does. TBH, I'm not really clear on exactly how TKLBAM does the Postgres backups (well, I know in theory what it does, but not how). I have had a look through the code, but it's still Python 2 and I'm still a bit of a novice when it comes to Python (particularly Python 2; I'm more comfortable with Python 3). Plus I find the TKLBAM code particularly dense. So unfortunately, I don't have any ideas on how to make TKLBAM "see" the GitLab Postgres instance.

However, if you wanted to pursue your current path (i.e. manually collect the bits you want backed up; rather than using "GZIP_RSYNCABLE=yes" as noted above) then I have a few ideas.

The first is to create a custom TKLBAM hook script to dump the DB somewhere that TKLBAM then backs up. IIRC accessing the GitLab Postgres instance requires using a GitLab-specific wrapper command (perhaps 'gitlab-psql' or similar).
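A rough sketch of what I mean (completely untested; the hook argument convention is my understanding of TKLBAM's, and the socket path, database name and 'gitlab-psql' system user are assumptions about a default Omnibus install, so double check before relying on it):

#!/bin/sh
# Hypothetical TKLBAM hook, e.g. saved as /etc/tklbam/hooks.d/gitlab-db-dump
# On the pre-backup pass, dump the Omnibus-bundled Postgres to a plain SQL
# file that TKLBAM will then include in the backup.
set -e
[ "$1" = "backup" ] && [ "$2" = "pre" ] || exit 0
DUMP_DIR=/var/backups/gitlab
mkdir -p "$DUMP_DIR"
# Use the embedded pg_dump, connecting over the local Omnibus socket
sudo -u gitlab-psql /opt/gitlab/embedded/bin/pg_dump \
  -h /var/opt/gitlab/postgresql gitlabhq_production \
  | gzip --rsyncable > "$DUMP_DIR/gitlabhq_production.sql.gz"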

The other would be to modify the existing hook script (which I wrote) to just run a GitLab backup of the DB. I.e. use the SKIP=... option to skip everything except the DB. I suspect that this should do the trick:

gitlab-backup create SKIP=uploads,repositories,builds,artifacts,lfs,registry,pages
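If that works as I expect, the resulting tar should then contain little more than the compressed database dump (db/database.sql.gz) plus a small backup_information.yml, so the hook's tarball, and therefore the incremental uploads, should stay pretty small.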

Regardless, thanks for bringing this to my attention and if you're able to assist, hopefully we can come up with something that works a bit better than the current arrangement... Although please note, I'm currently trying to funnel all my energy into getting a v16.0RC of Core out the door. Once I have that published, then I'll have a bit more time and energy to work on stuff like this.

Jeff Dagenais

Hi Jeremy!

I am sorry to report that the "BACKUP=dump and GZIP_RSYNCABLE=yes" strategy doesn't work. From the quick test I did, the DB is either not actually part of the archives, or not restored.

I did not pursue this further as I did not have the time.

Right now my server backs up the whole DB as a dump, as you designed it, and I added everything but the DB to SKIP since it's all plain files on the filesystem anyway, then added those directories to /etc/tklbam/overrides.

I understand this will not produce atomic backups but we are small scale and these backups are for ultra catastrophic scenarios, so we'll deal with the errors on restore if there are any... if that ever happens.

Cheers!

Jeremy Davis

It's unfortunate to hear that my ideas didn't work out... :( Thanks for giving them a try though and reporting back.

I'm not sure when I'll get a chance to look at this more closely, but to ensure it doesn't get forgotten, I've opened an issue on our issue tracker.

It sounds like you've created a solution that suits your needs, but if you do end up revisiting this at some point and/or have any further thoughts, please don't hesitate to let us know.
