Jeremy Davis's picture

This is a thread to discuss the (ongoing) work of developing a new install method for the next (v15.2) release of the TurnKey GitLab appliance. It is a continuation of the discussion (inadvertently) kicked off by Royce in this comment. FWIW, there is also a related open feature request regarding this on our tracker.

Summary of things so far:

  • There are maintenance issues with the current GitLab "source install" method (both for TurnKey and for end users).
  • As such, following discussions (both internal and public) it has been decided the best path forward is to use the GitLab Omnibus install for future releases (v15.2 onwards) of the GitLab appliance.
  • Jeremy (i.e. me) will do the preliminary work to develop a "base" GitLab appliance which has GitLab installed via Omnibus package.
  • Tim and Royce (and anyone else who is interested) will assist with testing and/or further development (via feedback and/or code and/or discussion, etc).
  • Tim and Royce (and others?) will also assist with testing migration from recent versions of TurnKey GitLab.
  • Once we have a working product that at least matches the functionality of the current GitLab appliance, plus some base documentation on how to migrate, etc, we'll release it as the next (v15.2) GitLab appliance release.

We'll then look at further improvements to make it (much) better than the current appliance. That will include easy config to:

  • Configure GitLab settings to support AWS as an external data store for large data items (i.e. git-lfs using AWS S3 as backing - see the sketch after this list).
  • Configure GitLab settings for Mattermost connectivity.
  • Configure SSL Certificates (TKL already provides basic server-wide Let's Encrypt TLS certs - but we'll make sure it fulfills the needs of GitLab, including any sub-domains etc).
  • Configure a RAM-disk swapfile.
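
For the S3/git-lfs item above, my expectation is that it largely boils down to a handful of gitlab.rb settings plus a reconfigure. Something like the following sketch (the key names and placeholder values are assumptions to verify against the GitLab object storage docs for whichever version ships):

# Sketch only - key names/values are assumptions to check against the GitLab docs.
cat >> /etc/gitlab/gitlab.rb <<'EOF'
gitlab_rails['lfs_enabled'] = true
gitlab_rails['lfs_object_store_enabled'] = true
gitlab_rails['lfs_object_store_remote_directory'] = 'my-gitlab-lfs-bucket'
gitlab_rails['lfs_object_store_connection'] = {
  'provider' => 'AWS',
  'region' => 'ap-southeast-2',
  'aws_access_key_id' => 'AKIA...',
  'aws_secret_access_key' => 'SECRET...'
}
EOF
gitlab-ctl reconfigure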

Anything I've missed guys?!

Let the discussion continue... :)

OnePressTech's picture

Here is a good VM config wizard sample doc from GitLab:

https://docs.gitlab.com/ce/install/aws/

This article has two interesting reference points:

1) How to create a high-availability GitLab configuration on AWS

2) How to configure the AWS GitLab AMI

These are both useful references for mapping out the TKLX AMI configuration wizard steps.

There is also the Bitnami GitLab documentation:

https://docs.bitnami.com/general/apps/gitlab/

 

NOTE:

For those reading this who wonder why bother creating a TKLX GitLab VM when there is already a GitLab AMI or a Bitnami GitLab VM...

1)  Supporting additional VM formats

2) TKLBAM incremental backup

3) Debian security updates

4) VM tools to manage the VM

5) TKLDev to manage variations

6) The TKLX community

7) Jeremy Davis (what can I say Jed...you are a big reason I am still with TKLX :-)

 

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

Your encouragement and kind words are warmly welcomed! :) Also those resources look like they'll be handy once we get to that bit.

Unfortunately, I don't have much to report yet (other than I REALLY hate Omnibus now I know more about it! Bloody Chef and Ruby and ... Argh!). I've already spent hours and can't yet get GitLab to install to the build environment (chroot). I keep thinking I'm close, but no cigar yet... :(

The more time I spend with web apps written in Ruby, the more I wonder why the hell anybody in their right mind would do that!? Anyway, that's enough whinging... Hopefully, I'll have a breakthrough soon and have something to share...

Jeremy Davis's picture

Ok, so there's still plenty of work left to do, but the major hurdle of getting the GitLab Omnibus package to install appears to be complete (some tidying up left to do, but it appears to install ok...).

I currently have it building to ISO so I can do some more development. FWIW SystemD doesn't work properly within a chroot, so building to an ISO (then installing to a VM) makes everything a bit more "real world".

I don't plan to do a ton of battle testing yet. I'll do just enough to make sure that everything appears to work. Then I'll test a few ideas I have for regenerating secrets. Plus make sure that we can set the first (admin) user email and password etc.

If you're at all interested, I have pushed the build code that I have so far (definitely not ready for production!) to a new branch on my personal GitHub. If you do wish to build yourself on TKLDev it should "just work". Essentially though, at this point it's just a vanilla GitLab-CE Omnibus install (no GitLab related inithooks, etc).

If you plan to do any development and/or build multiple times, I recommend that you run make root.patched initially, then copy out the deb package from the apt cache. E.g. from scratch in TKLDev:

cd products
git clone -b omnibus-pkg-install https://github.com/JedMeister/gitlab.git
cd gitlab
make root.patched
APT_CACHE=var/cache/apt/archives
mkdir -p overlay/$APT_CACHE  # make sure the target dir exists in the overlay
cp build/root.patched/$APT_CACHE/gitlab-ce_*.deb overlay/$APT_CACHE/

Not having to download the ~450MB omnibus package each rebuild will certainly make things a bit quicker! Although please note that it's still quite slow. In part because the install takes a while anyway (which won't change). But in part because I'm currently committing large areas of the filesystem to git repos to see exactly what is going on! That will be removed before the final build.

If you just want to build an ISO (or other builds) then you can just cd to products dir and do the git clone first (as per above), then skip the rest and use buildtasks to build the ISO. Once you have an ISO built, you can build the other builds. I.e.:

cd buildtasks
./bt-iso gitlab
# when done you should have the following file:
# /mnt/isos/turnkey-gitlab-15.2-stretch-amd64.iso
# then to build an alternate build, such as Proxmox/LXC
./bt-container gitlab-15.2-stretch-amd64
# or OVA/VMDK:
./bt-vm gitlab-15.2-stretch-amd64

As soon as I'm a bit further along, I'll upload an ISO and a Proxmox/LXC build for you guys to test. This is mostly just so you can see some progress and have a bit of a play if you're super keen.

OnePressTech's picture

Nice work Jed. Disappointingly on my end I will need to wait 2 weeks while our intrepid national Telcos try to get me on the NBN. After a week of Optus stuffing around unsuccessfully I am running on expensive mobile data in the interim, so big downloads are out for now. I'm in the process of trying to switch to Telstra. Fingers crossed that they can actually get me a working service any time soon. Telstra just announced more technical jobs being shipped to India. Apparently there are no available qualified Telecom people in Australia. Huh...I'm an ex-Telstra qualified Telco engineer...no one called me!

So I am expecting another two weeks of delay before I can test the new VM.

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

For fear of getting a bit too political here, seriously the NBN (explicitly MTM/FTTN) has been the most massive Government fail! I'm one of the lucky ones who got "proper" NBN (FTTH/FTTP) and it's been pretty good for me since I got connected (a couple of years ago now) but I've heard plenty of horror stories.

Bit of a pity you won't get a chance to test much within the next couple of weeks. But if I can get it to a point where I'm pretty happy with it before then, I'll see what I can do about getting you access to an AMI.

OTOH, there is also a possibility that I'm happy enough within the next 2 weeks that I may just publish it. But there is nothing stopping you from letting me know about any shortcomings, bugs or improvements we can make. If they're critical bugs, we can re-release ASAP. If they're not so critical, we can bundle them into the next release (which we should be able to do within a month).

One major benefit of the install via Omnibus package will be that generally it will be a breeze to rebuild with an updated version of GitLab! :)

Good luck with your networking woes. Hopefully Telstra comes to the party!

Jeremy Davis's picture

Just a quick update.

The install of GitLab appears to work ok and everything seems to be running as it should. However, the initial interactive inithooks that I have devised do not work as they should. The WebUI still requests that you set a password when you first access it. The email/password set by the inithook doesn't allow login and the password set via WebUI also fails... :(

So lots more work left to do, but definitely progress. FWIW I've pushed my changes back to the WIP GitHub repo (in my personal GH account).

Jeremy Davis's picture

Hi there and thanks for your interest and willingness to be involved. I really appreciate it.

Unfortunately, I've had a few other things I've needed to prioritise this week, but I have made a little more progress since last time I posted. I think it's pretty close to being ready for further testing, but it seems unlikely that I'll get there today (so may not be until next week).

One thing that would be incredibly helpful would be to get an understanding of what areas of the filesystem might need to be included in a backup? Anyone who is already using the Omnibus install (Tim? Royce?) might already have some ideas!? It's something I haven't looked at yet at all, but will need to before we do an official release. Any pointers that might save me time would be really appreciated.

Otherwise, if you'd like to build your own ISO using TKLDev, then that's an option (so you don't need to wait for me if you're super keen). There is a doc page which should help get you up to speed. To build the "new" GitLab (dev) appliance, when you get to building your second iso (it's recommended to just build Core first to ensure that you understand what is going on and ensure everything is working properly), then you can use the 'omnibus-pkg-install' branch of my fork of the gitlab repo. I.e.:

cd products
git clone --branch omnibus-pkg-install https://github.com/JedMeister/gitlab.git
cd gitlab
make

(Please also see my notes in this post above).

I have been testing the code (committed as of late last week) and have just committed and pushed the only (minor) changes that I have made to the code since I built the instance I've been testing. It's also worth noting that I plan to move the "schema" setup from the inithook to a confconsole plugin, as it triggers the Let's Encrypt cert generation. IMO it doesn't really make sense to do it at firstboot, as it's unlikely that most users will have everything in place at that point, and choosing to generate Let's Encrypt certs without DNS pre-configured will cause failure.

At this point, I doubt I will get any further this week, but hope to have something ready for others to test early next week.

OnePressTech's picture

Hey Jed,

Sorry I'm not being much help. I have extracted myself finally from Optus but am now on the Telstra joyride. We'll see how that goes.

I expect to be back on this next week.

Regarding Backup, are we trying to identify volatile / non-volatile folders / files to know which should / should not be backed up by TKLBAM?

GitLab backup may provide some helpful guidance:

https://docs.gitlab.com/ce/raketasks/backup_restore.html

IDEA:

Since we are already following an unconventional journey with this particular VM by installing via omnibus rather than source...why stop there :-)

We could just create a local folder on the GitLab VM, configure / trigger a GitLab backup that dumps everything into that folder, and configure TKLBAM to just back up that backup folder and consider everything else as part of the mirrored image.

So on a restore, we do the opposite: TKLBAM restores the backup folder on a freshly created GitLab VM and then a GitLab restore is triggered. I assume we would not touch TKLBAM so it would probably be a cron script that triggers a restore if the folder date changes...or something like that.
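
As a rough sketch of what I mean (paths and names are placeholders only), the cron job could be as simple as:

#!/bin/bash
# Placeholder sketch of the idea only - run periodically from cron.
# If the restored backup file is newer than our stamp, trigger a GitLab restore.
set -e
BACKUP_DIR=/var/opt/gitlab/backups
STAMP=/var/lib/gitlab-restore.stamp

latest=$(ls -t "$BACKUP_DIR"/*_gitlab_backup.tar 2>/dev/null | head -n1)
[ -n "$latest" ] || exit 0

if [ ! -e "$STAMP" ] || [ "$latest" -nt "$STAMP" ]; then
    gitlab-ctl stop unicorn
    gitlab-ctl stop sidekiq
    # force=yes (to skip the interactive prompt) is an assumption to verify
    gitlab-rake gitlab:backup:restore BACKUP="$(basename "$latest" _gitlab_backup.tar)" force=yes
    gitlab-ctl restart
    touch "$STAMP"
fi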

Thoughts?

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

I'm appreciative of any input I get! So it's all good. Plus I totally understand the telco pain...

Regarding Backup, are we trying to identify volatile / non-volatile folders / files to know which should / should not be backed up by TKLBAM?

Exactly! :)

GitLab backup may provide some helpful guidance: https://docs.gitlab.com/ce/raketasks/backup_restore.html

Fantastic, thanks! That will help tons!

We could just create a local folder on the GitLab VM, configure / trigger a GitLab backup that dumps everything into that folder, and configure TKLBAM to just back up that backup folder and consider everything else as part of the mirrored image.

Hmm, that sounds like a very reasonable suggestion. We'd then look to avoid backing up all/most of the rest of GitLab. Although FWIW generally anything included in an apt package (with the exception of the config in /etc and sometimes data within /var), would not be desirable to include anyway.

FWIW, it appears that re GitLab, much (if not all?) of the data in /var (/var/opt/gitlab) is generated by the 'gitlab-ctl reconfigure' command (generated from settings in the /etc/gitlab/gitlab.rb).

Regarding triggering the GitLab backup (and restore), assuming that it can be done via the command line, TKLBAM hooks could easily cope with that!
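
For context, TKLBAM hook scripts live in /etc/tklbam/hooks.d/ and (as I understand it - treat the argument convention as an assumption) get called with the operation and the phase, so a skeleton would look something like this:

#!/bin/bash
# Skeleton sketch of /etc/tklbam/hooks.d/gitlab - the op/phase argument
# convention here is an assumption, not copied from a real hook.
op=$1      # backup | restore
phase=$2   # pre | post

case "$op/$phase" in
    backup/pre)
        # create a fresh GitLab backup before TKLBAM snapshots the filesystem;
        # a non-zero exit here should make the TKLBAM backup fail loudly too
        gitlab-rake gitlab:backup:create
        ;;
    restore/post)
        # once TKLBAM has put the files back, trigger GitLab's own restore
        echo "GitLab restore would be triggered here"
        ;;
esac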

My only concern regarding your idea would be: what happens if the GitLab version that creates the backup is different from the version it is being restored to? I have no idea how the Omnibus package might cope with that?!

And between GitLab having a rapid release cycle and the ease of updating that the Omnibus package will provide, there is a very real chance that the backup data will be restored to a different version of GitLab than the one it was created from. FWIW, the current backup somewhat works around that by including GitLab itself (which is a double-edged sword, as it requires the user to manually keep the install up to date, and also makes it likely that it won't "just work" when migrating between major versions).

I guess we could consider a restore hook to try to match the installed version with the backup version. But the more stuff scripted during restore, the more factors need to be considered, and the greater the risk of something breaking...

Note though, that this isn't really a specific concern related to your suggestion. It would apply to many other appliances and a "normal" TKLBAM style backup too. We have a few other appliances that have 3rd party repos. But given that the GitLab Omnibus package includes so much software, I anticipate that the implications would be significantly larger and, due to the sheer volume of software installed, issues significantly more likely to occur.

Having said that, TKLBAM's primary aim is to provide reliable backup and restore for a specific single server. Usage as a data migration tool is certainly in scope, but secondary to the primary objective. Plus it's not guaranteed to work without the potential need for manual intervention. Still, I think it requires some thought.

The other thing that occurs to me is that ideally we would want an uncompressed backup, stored within a consistent place (e.g. no date stamped directories). Otherwise the daily diffs (i.e. daily incremental backups) would potentially be ridiculously large. I am unsure whether the GitLab backup tool would support that, but I'll have a look sometime soon.

Regardless, awesome suggestion and definitely food for thought! :)

OnePressTech's picture

Thanks for everything Jed.

Regarding cross-version backup / restore or stale restores, this is always an issue.

To mitigate the risk the GitLab backup could be set on a daily schedule so that the backup TKLBAM stores is always current.

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

If we run the backup as a pre-backup hook, then the backup should always be up to date. Although we need to consider how we might handle things if, say, the GitLab backup fails. I assume the best course would be for the TKLBAM backup to also fail (ideally loudly) in that case.

Another consideration that has just occurred to me too, is that by default TKLBAM will possibly want to back up the (Postgres) DB. I assume that a GitLab backup would include the relevant DB data? If so, we won't want the DB being included separately as well. TBH I'm not sure how TKLBAM determines which DB is in use? We might be lucky and because it's not in the default place (i.e. installed by Omnibus) it doesn't even try to back it up. Worst case though, we could explicitly exclude it.

PS, it doesn't look like I'm going to get any more time to play with it today, but so far the basic install and firstboot seems fairly reliable. The inithooks are still disabled by default, but the interactive one I've been working on appears to work well. And I've tested: just manually removing the secrets (from /var/opt/gitlab/) and (re)running 'gitlab-ctl reconfigure' seems quite happy to regenerate them. So that looks like it will work fine. I think I mentioned that the inithooks currently give the option to do the Let's Encrypt at first boot, but I plan to move that out to Confconsole I reckon (unless you have a good reason not to).

OnePressTech's picture

If TKLBAM triggers a GitLab backup before it does its diffs that would work.

LetsEncrypt from the console is a good plan. Some DevOps may want to use their own cert rather than LetsEncrypt. Cert-upload should eventually be added as an option in confconsole so the DevOps supplied cert can be auto-installed as part of the installation process.

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

Awesome, sounds like we're on the same page then. That's great.

Unfortunately, it looks like I have another (programming related) task that I'm going to need to prioritise early next week. Whilst that should be quite fun, I was hoping to get GitLab wrapped up (at least the initial "rc" of the rebuilt appliance) early next week. It seems likely that that will not be a realistic goal now and probably mid-week will be the earliest. Regardless, I'll certainly see what I can do. TBH, it's really close I reckon.

Take care mate and chat more soon.

Jeremy Davis's picture

Thanks for helping with testing! :)

When you build the iso, it should pre-install the latest version of GitLab-CE by default. From then on, you can follow the GitLab documentation on upgrading (use the instructions for Debian).

If you update regularly, then updating GitLab to the latest version should be as simple as running this:

apt update
apt install gitlab-ce

The only additional thing that you may need to do is update the config file (/etc/gitlab/gitlab.rb). I.e. add/update any options that have changed for the new version.

However, it's important to note that if you fall behind the upstream updates, you will need to update to the latest release of the major version you are on before updating to the new major version. GitLab versioning is explained on this page. Please specifically see the section on upgrade recommendations. Please note that page is for GitLab-EE but also applies to GitLab-CE. (links updated to point to CE docs)
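
If you do need to step through intermediate versions, the available Omnibus packages can be listed and pinned explicitly via apt; for example (the version shown is only an example):

# list the gitlab-ce versions available from the Omnibus apt repo
apt-cache madison gitlab-ce
# install a specific (intermediate) version rather than the latest
apt install gitlab-ce=11.9.1-ce.0
# optionally stop unattended/accidental jumps until you're ready to upgrade
apt-mark hold gitlab-ce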

OnePressTech's picture

GitLab docos are symmetrical...just change /ee/ to /ce/ in the URL.

So in Jed's post above, the upgrade recommendations for CE are at:

https://docs.gitlab.com/ce/policy/maintenance.html#upgrade-recommendations

AND for EE is at:

https://docs.gitlab.com/ee/policy/maintenance.html#upgrade-recommendations

JED: Please adjust your post...it is not always true that EE instructions apply to CE.

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

Thanks mate. I'll update my post above. I just followed the links that the generic GitLab docs provided. They obviously favour the EE docs when they need to provide links from generic docs to specific docs (which makes sense). I'll keep your point in mind for future links! :)

Jeremy Davis's picture

I have made a bit of a start on migration documentation. But it didn't take very long for me to start hitting details that required further elaboration and consideration. It's already chewed up a fair bit of time and there is no end in sight...

So I've had to put that aside for now. I won't bother posting what I have yet as it's incomplete and a little contradictory (as I discovered more details). But I already have some quite useful insight into how it might work under different scenarios.

If anyone wants pointers on how I anticipate migration from a version of our existing appliance (or any source install for that matter), to the upcoming (Omnibus install) GitLab appliance might work, please ask. To provide relevant and specific thoughts, I'll need info on the TurnKey version (or Debian/Ubuntu version if not TurnKey) that you currently have and the version of GitLab that you are running on it.

I'm almost certain that providing info and pointers for a specific scenario will be much easier than needing to consider all possibilities...!

OnePressTech's picture

I agree Jed. Though I am sure TKLX clients would all love fully life-cycled VMs, that is a huge investment that no VM supplier (Bitnami, AWS, Google, Azure, etc) has committed to. TKLX provides shrink-wrapped VMs...it is up to the VM users to life-cycle them. This is the same no matter who people source their VMs from.

So other than some basic guidance on migrating from old GitLab VM to new GitLab VM I think your approach makes sense. Keep it simple and let the community backfill :-)

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

After a few sidetracks, I finally swung my attention back to GitLab today. And I have good news to report!

I have inithooks and tklbam working (at least in theory). I have tested most of my code on a running dev server, but not from a fresh install from ISO. So there are almost certainly going to be bugs (I'm hoping just typos and other easily fixed minor flaws). I just built a fresh ISO from my build code and have pushed my latest updates back to my repo (as noted above within this thread).

I'm knocking off for the day, so won't get any further now. If anyone gets a chance to test it out in the meantime, I'd love some feedback. Please post here with any issues you strike. If/when I hit any bugs when testing tomorrow, I'll post back here too so we are all on the same page.

Once I have done a bit more testing, and ironed out any of the above mentioned bugs (that I'm sure will exist), I think we'll be ready for proper "battle-testing". I also need to write up a bit of documentation and I'll clean up the commit history before I push to the proper TurnKey repos. But all-in-all I'm really happy with the progress.

I don't want to jinx myself, but I'm thinking we may be able to publish next week. Even if I can't get any of you good people to assist initially, if I can confirm that everything works (e.g. inithooks and a backup from one server and restore to another) then I might even push ahead with publishing v15.2. If we strike any new bugs I miss, then no reason why we can't do another release soon after if need be...

OnePressTech's picture

Cheers Jed for all your hard work. In a few days I will be able to do some testing and am available to work with you on documentation as well.

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

First minor flaw discovered and fixed! (I forgot to re-enable the gitlab inithooks - doh!)

I'm now looking at improving the inithook (password setting component) as it displays the GitLab password set by the user in plain text (which I really don't like for obvious reasons). I'm not 100% sure but I think it should be pretty easy...

I also haven't had a chance to test the backup. At this point, it will fail loudly if the GitLab version is different to the version backed up. I have an idea on how we might automate that, but I'm not sure how reliable it will be (and won't work for backups of previous TurnKey source installed appliances). Once I've had a chance to do some basic testing, I'll upload the tklbam profile somewhere to make it easy to test.

Jeremy Davis's picture

I'm pretty happy with everything so far, but logging in after firstboot is problematic... It appears that something isn't running as it should at first boot. But unfortunately, I'm struggling to debug it.

If I try to log in with the credentials I supply interactively on firstboot (via the inithook I have created) then at best I get "Invalid Login or password.", but I've also been getting random 422 and 500 errors too! And nothing remotely useful from the GitLab logs - at least not that I can find... The 422 errors seem to be related to CSRF (authenticity) tokens and the 500 errors are usually something up with the GitLab backend (at least that was the case with our old source install). Unfortunately, the only place anything regarding these errors appears to be logged is the (Omnibus) Nginx logs, and they essentially just give me the same info as the webpage (i.e. the error number). Not very helpful!
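
For anyone else digging through the same thing, the Omnibus bundle's log tailer is a bit friendlier than hunting down individual files (the service names below are the usual Omnibus log dirs):

# tail everything the Omnibus bundle logs, live
gitlab-ctl tail
# or just one service, e.g. the Rails app or the bundled Nginx
gitlab-ctl tail gitlab-rails
gitlab-ctl tail nginx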

The weirdest thing is that if I manually reinvoke the inithook (entering exactly the same info that I did the first time), everything works as it should and I can log straight in! And that is the case whether I re-run them interactively or non-interactively. It only seems to be on firstboot that it doesn't work.

I'm almost certain that it's either something not ready at the point of the firstboot when I'm trying to do stuff, or a race condition. Although I haven't been able to find anything meaningful in the logs, so I'm only guessing.

I'm pretty sure it's not explicitly anything wrong with GitLab, but I can't help but get frustrated with it! Anyway, I do have some ideas on how to force it to log the output on firstboot (inithooks really should have better logging options IMO), so will try that and then I'll probably knock off for the day.

If anyone wants to test backups, you'll want the tklbam hook script and conf file from the buildcode overlay (they go in /etc/tklbam/hooks.d/) and the profile, which I've just (temporarily) uploaded to the repo as well (here - if the link doesn't work, then please let me know, although I will likely delete it at some point soonish). Note that I have not properly tested the hook script, or the tklbam profile, so DO NOT use either on a production server!!! It's unlikely to cause any damage, but I can't guarantee it. It's also likely to not work...

Jeremy Davis's picture

I just wanted to note that I've worked through the issue that stumped me late last week. I think I'm also overdue for a progress report anyway....

So it seems that I was a bit quick to lambaste GitLab on this occasion... (What really...?!) :)

Generally, the issue was an intersection of SystemD, GitLab's default systemd service/unit file, and the fact that the inithooks run really early in the boot process. Whilst a bit of a pain, it has allowed me to get a deeper understanding of SystemD and GitLab too.

The issue specifically was that GitLab wasn't yet running when the inithooks were trying to configure it, hence why it failed miserably on firstboot, but worked consistently later on.

I've worked around that by providing a temporary inithooks-specific firstboot GitLab systemd unit file, so it starts early when required. The new service is then stopped once the config is complete. The "proper" GitLab service then starts as per usual, at its allotted point within the boot process.
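
To give a rough idea of the shape of it (this is an illustrative sketch only - the unit name and contents are approximations, not the exact unit in the build code; the runsvdir-start path is the one the stock gitlab-runsvdir.service uses):

# Illustrative sketch only - not the actual unit shipped in the build code.
cat > /etc/systemd/system/gitlab-firstboot.service <<'EOF'
[Unit]
Description=GitLab runit supervisor (firstboot/inithooks only)
DefaultDependencies=no
After=local-fs.target network.target

[Service]
ExecStart=/opt/gitlab/embedded/bin/runsvdir-start
Restart=on-failure
EOF
systemctl daemon-reload
# the inithook starts this early in the boot, then stops and removes it once
# the firstboot config is done, so the normal gitlab-runsvdir.service takes
# over at its usual point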

FWIW, it might make booting marginally quicker to adjust the default GitLab service file to consistently start earlier (it seems to run fine, even really early in the boot process), however as that is part of the Omnibus package, fiddling with that seems a bad idea...

I have been pushing all my updates to my repo. I make a habit of doing that every day after I've worked on it, so feel free to track my progress more closely via the commits log. As you can see, my commits are quite atomic, so it probably provides a pretty good indication of what I've been up to... Once I am happy with it all, I'll rewrite/squash the commit history before merging into the TurnKey repo (lots of the commits are things I've tried then backed out of, or changed direction on - so the complete commit history is of limited value IMO).

FWIW I'm now continuing work on the TKLBAM config. Leveraging the GitLab backup mechanism makes the most sense really, especially considering that TKLBAM doesn't appear to see the GitLab Postgres installation. I have the basics working ok, but trying to get it all to work nicely within the limitations of GitLab backup and TKLBAM is quite complex really. Much more complex than I initially anticipated. But I am having some wins so far. I think that I'm also trying to do a bit too much with the backup really... But I just can't help myself! :)

I still need to get onto writing up some docs as there will be a lot of nuance to the TKLBAM requirements I suspect.

OnePressTech's picture

Sounds good Jed. I will be starting to play with it over the weekend. I look forward to seeing your handiwork up close :-)

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

I was hoping to get a bit further today, but at this stage it looks unlikely...

Hopefully you'll get a chance to check it out over the weekend. Please let me know how it goes and any thoughts you have (good, bad or otherwise).

FWIW as the tklbam hook has grown (in size and complexity) I have been considering whether I should persevere with the bash hook I have, or whether it might be better to rewrite it in python?! If you have any thoughts at all on that (language and/or functionality and/or other ideas) please share.

Also, if you need more info about using the TKLBAM profile, please hit me up via text message or something. Otherwise, I'll be back onto it on Monday.

Jeremy Davis's picture

Ok I'm back into it and hoping to get the basics of the tklbam profile finished within the next day or 2. I have a few other apps that need updating so hopefully I can get GitLab into that batch. If I can't get there, I may have to hold off further.

Also one thing that I probably should note is how to use the tklbam-profile. On your gitlab server, this is how it's done:

cd /root
wget https://github.com/JedMeister/gitlab/raw/omnibus-pkg-install/turnkey-gitlab-15.3-stretch-amd64.tar.gz
mkdir tklbam-profile
tar xzvf turnkey-gitlab-15.3-stretch-amd64.tar.gz -C tklbam-profile

Then finally initialise TKLBAM with the profile (and your HUB_API_KEY):

tklbam-init $HUB_API_KEY --force-profile=tklbam-profile

Note too, that this should also work if you have GitLab installed via Omnibus (e.g. Core with Omnibus GitLab installed). Although you will also need the hook script (and conf file). This should do the trick:

path=etc/tklbam/hooks.d
files="gitlab gitlab.conf"
url=https://raw.githubusercontent.com/JedMeister/gitlab/omnibus-pkg-install/overlay
for file in $files; do
    wget $url/$path/$file -O /$path/$file
done

PS I just realised that there is a dumb mistake in the hook script. I've fixed it and pushed back to the repo.

Jeremy Davis's picture

I've really been pulling my hair out... TBH, I'm a bit stuck and I'm not really sure where to go from here...

It seems that restoring a GitLab backup (created by GitLab installed via Omnibus; restored to exactly the same version of GitLab, along with the config and secrets files as noted in the docs) causes GitLab to stop working...! :(

The exact issue is that all seems well until you try to log in. As the 'root' user, login appears to proceed, but then GitLab gives a 500 error.

I can reliably and consistently reproduce this error using backups of various different v10.x and v11.x versions. Googling returns tons of results, dating all the way back to v8.x (possibly beyond) which suggests that this issue is not new. There is a chance that I'm doing something that GitLab doesn't expect, but IMO, it should "just work". Anyway, here's the stacktrace when the error occurs:

==> /var/log/gitlab/gitlab-rails/production.log <==
  Parameters: {"utf8"=>"✓", "authenticity_token"=>"[FILTERED]", "user"=>{"login"=>"root", "password"=>"[FILTERED]", "remember_me"=>"0"}}
Completed 500 Internal Server Error in 248ms (ActiveRecord: 23.1ms)
  
OpenSSL::Cipher::CipherError ():
  
lib/gitlab/crypto_helper.rb:27:in `aes256_gcm_decrypt'
app/models/concerns/token_authenticatable_strategies/encrypted.rb:55:in `get_token'
app/models/concerns/token_authenticatable_strategies/base.rb:27:in `ensure_token'
app/models/concerns/token_authenticatable_strategies/encrypted.rb:42:in `ensure_token'
app/models/concerns/token_authenticatable.rb:38:in `block in add_authentication_token_field'
app/services/application_settings/update_service.rb:18:in `execute'
lib/gitlab/metrics/instrumentation.rb:161:in `block in execute'
lib/gitlab/metrics/method_call.rb:36:in `measure'
lib/gitlab/metrics/instrumentation.rb:161:in `execute'
app/controllers/application_controller.rb:467:in `disable_usage_stats'
app/controllers/application_controller.rb:453:in `set_usage_stats_consent_flag'
lib/gitlab/middleware/rails_queue_duration.rb:24:in `call'
lib/gitlab/metrics/rack_middleware.rb:17:in `block in call'
lib/gitlab/metrics/transaction.rb:55:in `run'
lib/gitlab/metrics/rack_middleware.rb:17:in `call'
lib/gitlab/middleware/multipart.rb:103:in `call'
lib/gitlab/request_profiler/middleware.rb:16:in `call'
lib/gitlab/middleware/go.rb:20:in `call'
lib/gitlab/etag_caching/middleware.rb:13:in `call'
lib/gitlab/middleware/correlation_id.rb:16:in `block in call'
lib/gitlab/correlation_id.rb:15:in `use_id'
lib/gitlab/middleware/correlation_id.rb:15:in `call'
lib/gitlab/middleware/read_only/controller.rb:40:in `call'
lib/gitlab/middleware/read_only.rb:18:in `call'
lib/gitlab/middleware/basic_health_check.rb:25:in `call'
lib/gitlab/request_context.rb:20:in `call'
lib/gitlab/metrics/requests_rack_middleware.rb:29:in `call'
lib/gitlab/middleware/release_env.rb:13:in `call'
Started GET "/favicon.ico" for 127.0.0.1 at 2019-03-06 06:14:53 +0000

I've found a workaround (and confirmed that it actually works), but I'm not really clear on the larger implications of applying it. It seemed to have no adverse effect on my minimal test data set (a single user with a repo and a couple of issues), but it'd be great to be able to get others to test it, ideally on a decent dataset. Especially one that is configured to run tasks.

FWIW, here's the workaround:

cat > /tmp/fix.rb <<EOF
settings = ApplicationSetting.last
settings.update_column(:runners_registration_token_encrypted, nil)
EOF
chown git /tmp/fix.rb
gitlab-rails runner -e production /tmp/fix.rb && gitlab-ctl restart
rm /tmp/fix.rb

As you can probably guess from the code, it wipes out the encrypted runner registration token, although what the full implications of that might be are unclear to me...

OnePressTech's picture

After restoring the data are you resetting the GitLab instance?

If not, try executing the following after a restore (in this order):

gitlab-ctl reconfigure
gitlab-ctl restart

 

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

FWIW the gitlab_restore function that I have constructed is within the TKLBAM hook script. To save you from decoding my bash, here is the essence of what it does:

gitlab-ctl reconfigure
gitlab-ctl stop unicorn
gitlab-ctl stop sidekiq
gitlab-rake gitlab:backup:restore BACKUP="relevant_backup"
gitlab-ctl restart
gitlab-rake gitlab:check SANITIZE=true

So it looks like I'm not actually re-running 'gitlab-ctl reconfigure' AFTER restoring the backup! I suspect that's the issue! Also TBH, I don't recall why I'm running 'gitlab-ctl reconfigure' BEFORE I run the restore?! Seems a bit redundant in retrospect...

Armed with your input and my reflection, I'll try tweaking the script a little and see how we go.

To be explicit, I'll try moving the 'gitlab-ctl reconfigure' from before the restore, to afterwards (between the restore step and the restart step as you suggest).

Also do you have any thoughts on the value of running the check? Perhaps it's not really of value there and just slows things down?

OnePressTech's picture

See 500 Error on login after restoring backup

The reason you may have added the gitlab-ctl reconfigure before the restore is that the GitLab omnibus restore pre-requisites include the following requirement: "You have run sudo gitlab-ctl reconfigure at least once." The reality is that you can't install GitLab without doing gitlab-ctl reconfigure, so I am not sure of the purpose of that requirement.

So the revised restore would be...

	gitlab-ctl stop unicorn 
	gitlab-ctl stop sidekiq 
	gitlab-ctl status 
	gitlab-rake gitlab:backup:restore BACKUP="relevant_backup" 
	gitlab-ctl restart 
	gitlab-ctl reconfigure 
	gitlab-rake gitlab:check SANITIZE=true 

NOTE: From the GitLab restore documentation

To restore a backup, you will also need to restore /etc/gitlab/gitlab-secrets.json (for Omnibus packages) or /home/git/gitlab/.secret (for installations from source). This file contains the database encryption key, CI/CD variables, and variables used for two-factor authentication. If you fail to restore this encryption key file along with the application data backup, users with two-factor authentication enabled and GitLab Runners will lose access to your GitLab server.

Nice progress Jed...getting there :-)

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

Awesome stuff. That sounds good. I'm really hoping moving that gitlab-ctl reconfigure does the trick (I suspect that it will). On reflection, I'm not really sure why that didn't occur to me previously (nor why none of the many threads I've read double-checked that with users experiencing the issue). Anyway...

FWIW the whole /etc/gitlab dir (i.e. gitlab-secrets.json & gitlab.rb, plus any TLS certs that have been generated) are included in the backup by TKLBAM itself. All the rest of the GitLab directories (i.e. /opt/gitlab & /var/opt/gitlab) are explicitly excluded. The GitLab backup runs prior to TKLBAM doing anything. The file is then transferred out of the GitLab backup directory (/var/opt/gitlab/backups by default) to a location that is included in the backup (currently /var/cache/tklbam-gitlab). Then TKLBAM does its thing...
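
For reference, the exclusion side of that, expressed as TKLBAM overrides, looks roughly like this (a from-memory sketch, not a verbatim copy of the shipped profile):

# From-memory sketch of the relevant overrides (a leading '-' excludes a path);
# not a verbatim copy of the shipped profile.
cat >> /etc/tklbam/overrides <<'EOF'
-/opt/gitlab
-/var/opt/gitlab
EOF
# /etc/gitlab rides along with the normal /etc inclusion, and the staged
# backup dir (/var/cache/tklbam-gitlab) is covered by the profile itself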

I've also renamed the backup file (and stored the original name in a text file to support the restore process). That way, assuming that GitLab tars up the files in the same order, unless lots changes, the daily incremental backups should still be quite small. The rationale is that if the file is named as per GL naming convention, as far as TKLBAM is concerned, it will be a new file every day. Giving it a consistent name, makes TKLBAM realise that it's essentially the same file, but it's not exactly the same (so does a binary diff). TBH, I haven't actually double checked that my understanding is correct, which I should do. Because if it makes no difference, it's probably better to just move it, rather than renaming it too.

It has also occurred to me to untar the GitLab backup before doing the TKLBAM backup. TKLBAM already tars everything up prior to upload, so there is no increase in backup size by doing that (it may actually decrease the size of the backup?!). If we went that way, you could be assured that the daily incremental backup size would only increase directly relative to the new files, etc. OTOH, for many users it may not make much difference and it means an additional operation at backup time (untarring the GL backup) and restore time (tarring it back up so GL will recognise it). All that additional process means additional opportunities for stuff to go wrong though, so I'm inclined to leave it be... (Simply rename it to a generic name).
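
In concrete terms, the pre-backup step described above amounts to something like this (a sketch; the real hook in the repo differs in detail):

# Sketch of the pre-backup staging step described above (the real hook differs).
GITLAB_BACKUP_DIR=/var/opt/gitlab/backups
STAGING_DIR=/var/cache/tklbam-gitlab

gitlab-rake gitlab:backup:create

mkdir -p "$STAGING_DIR"
latest=$(ls -t "$GITLAB_BACKUP_DIR"/*_gitlab_backup.tar | head -n1)
# record the original (timestamped) name so the restore side can rename it
# back, but store it under a constant name so TKLBAM's daily incrementals
# treat it as the same file and only ship a binary diff
basename "$latest" > "$STAGING_DIR/original_name"
mv "$latest" "$STAGING_DIR/gitlab_backup.tar"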

As per always, interested in any input you (or anybody else) has.

Also, unfortunately I've been dragged away onto other stuff now. So it seems unlikely I'll be able to get back to this until next week... I think I'm really close now though! :)

OnePressTech's picture

The reason I added the reminder is that the script you and I listed above only includes:

gitlab-rake gitlab:backup:restore BACKUP="relevant_backup"

That restores everything BUT the gitlab-secrets.json file.

FYI - there is a 500 error issue for missing gitlab-secrets.json:

https://gitlab.com/gitlab-org/gitlab-ce/issues/49783

Thanks again for all your hard work...Very much appreciated :-)

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

With a few explicit exclusions, the whole /etc directory (including /etc/gitlab) is included in the TKLBAM backup as part of the normal profile. So the gitlab-secrets.json file (and the config file, plus TLS certs) are all automagically restored by TKLBAM before any of the rest of this stuff happens. I appreciate the reminder though as I hadn't committed and pushed back the other required changes to the TKLBAM profile repo. So to avoid the risk of forgetting that, I've just done that now. :)

And actually, I wonder if re-running gitlab-ctl reconfigure with a different gitlab-secrets.json file (and then not re-running it after the restore) is perhaps part of the issue in the first place? TBH, I hadn't considered that before, but it actually seems plausible...

Anyway mate. Thanks again for your input. Hopefully I'll be able to tidy this up early next week. Cheers.

OnePressTech's picture

gitlab-rake gitlab:backup:restore BACKUP="relevant_backup"

Does not restore gitlab-secrets.json.

I know you are backing it up...what instruction is restoring it?

And yes...the gitlab-ctl reconfigure should be run AFTER the gitlab-secrets.json file is restored.

I expect that we're on the same page :-)

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

I understand that neither gitlab-secrets.json nor the gitlab.rb config file, are included when gitlab-rake gitlab:backup:restore BACKUP="relevant_backup" is run.

But because TKLBAM already includes most of /etc (with some exclusions) and gitlab-secrets.json (and gitlab.rb) are stored in /etc/gitlab, they are automagically included in the normal TKLBAM backup. I.e. no additional command/inclusion/etc is required to include them.

That too may have been part of the issue with the 500 errors. The restore process that I've scripted runs post TKLBAM restore. So the backed up gitlab-secrets.json file has already been restored when the restore component of the Gitlab specific hook script runs. But then I was running gitlab-ctl reconfigure (i.e. original data, with restored gitlab-secrets.json). Then restoring the data and not re-running gitlab-ctl reconfigure.

Hopefully I should be able to get back to this today. Armed with this additional info, I'm really confident that it won't take much more work to get it going. Then it'll just require the docs to be written. That may take a little more time, but hopefully shouldn't be too bad.

OnePressTech's picture

Cheers Jed :-)

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

Ok so it all looks pretty good at this point. Thanks to your hints on the order that I was doing things during restore. I'm almost certain that was the cause of the 500 errors (hadn't run reconfigure post restore with matching secrets file in place). Admittedly it was a limited dataset I tested the backups with, but that part relies on the GitLab backup/restore mechanism, so I'm pretty confident that is good. And my backup from one test server restored nicely on my second (clean install) test server and everything appeared to be as it should.

So all I need to do is tidy up the code a little, rewrite the history (my repo is 51 commits ahead of master - which is probably a bit excessive...) and do some last minute testing. I've made a start on the doc page and hopefully it shouldn't take too long to get it finished.

If all things go as planned tomorrow, I'll be doing the build. Probably publish early next week. :)

There are still a few "nice to haves" that would be good to include, but I think at this point, I'll just add them to tracker and leave them for another day...

OnePressTech's picture

I'll do some testing on the weekend.

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

FWIW, I've run out of time now today, but I have added the TLS (Let's Encrypt) cert stuff to Confconsole. I was going to leave that for now, but figured I may as well fix that little bit now...

I was hoping to get the build done, but didn't quite get there... :( Oh well. Monday it will be. If you get a chance to test that'd be great, but if you don't (or if you do and don't find any glaring issues) I'll aim to tidy up the commits and build it Monday, with a plan to publish ASAP. If there are bugs I've missed in the released version, I'll just fix it and re-release ASAP.

To reiterate the process of creating a TKLBAM backup using the profile in the repo and the hook script (with a minor update - the name of the tklbam profile file):

cd /root
wget https://github.com/JedMeister/gitlab/raw/omnibus-pkg-install/turnkey-gitlab-15.2-stretch-amd64.tar.gz
mkdir tklbam-profile
tar xzvf turnkey-gitlab-15.2-stretch-amd64.tar.gz -C tklbam-profile

Then finally initialise TKLBAM with the profile (and your HUB_API_KEY):

tklbam-init $HUB_API_KEY --force-profile=tklbam-profile

Note too, that this should also work if you have GitLab installed via Omnibus (e.g. Core with Omnibus GitLab installed). Although you will also need the hook script (and conf file). This should do the trick:

path=etc/tklbam/hooks.d
files="gitlab gitlab.conf"
url=https://raw.githubusercontent.com/JedMeister/gitlab/omnibus-pkg-install/overlay
for file in $files; do
    wget $url/$path/$file -O /$path/$file
done
OnePressTech's picture

I'll probably do a first pass on the weekend and then do another pass after your Monday / Tuesday update.

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

I haven't yet announced it via the blog (today or tomorrow hopefully) but the shiny new Omnibus installed GitLab appliance is now available! Yay! :)

All the download links on the appliance page should be delivering "GitLab v15.2". It's available via the Hub now as well. I've uploaded the new AMI to AWS Marketplace (yesterday) too, but it usually takes them a little while to get it published (hopefully within the next week or so?).

As noted above while the work was in progress, it was a much bigger job than I initially anticipated. I ended up touching 34 files, with a total of 898 lines removed and 693 lines modified/added! It also turns out that using the Omnibus package for install results in a much smaller image too, which is an unexpected bonus. v15.2 is literally less than half the size (v15.1 ISO = ~1.6GB; v15.2 ISO = ~735MB)! TBH, I'm not sure why (perhaps because everything is pre-compiled?) and don't intend to bother finding out; but a great bonus none the less.

I would love some feedback when you get a chance. You should be able to manually migrate data to it via a normal GitLab Omnibus backup (obviously need to match the versions and transfer the files from /etc/gitlab too).
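
For anyone keen to try that manual migration, the rough shape of it (hedged - please double check versions, paths and file ownership against the GitLab backup/restore docs) is:

# On the old (Omnibus) server - GitLab versions on both ends must match:
gitlab-rake gitlab:backup:create
# then copy the newest /var/opt/gitlab/backups/TIMESTAMP_gitlab_backup.tar
# plus /etc/gitlab/gitlab.rb and /etc/gitlab/gitlab-secrets.json across

# On the new TurnKey GitLab server:
cp TIMESTAMP_gitlab_backup.tar /var/opt/gitlab/backups/
chown git:git /var/opt/gitlab/backups/TIMESTAMP_gitlab_backup.tar
gitlab-ctl reconfigure          # pick up the copied gitlab.rb/secrets
gitlab-ctl stop unicorn
gitlab-ctl stop sidekiq
gitlab-rake gitlab:backup:restore BACKUP=TIMESTAMP
gitlab-ctl reconfigure
gitlab-ctl restart
gitlab-rake gitlab:check SANITIZE=true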

As previously noted, TKLBAM leverages the built-in GitLab backup mechanism and the TKLBAM hook scripts have "experimental" support for automagically attempting a version match on restore. I.e. it will attempt to downgrade/upgrade GitLab to match the GitLab backup version (won't work for a manual migration, but should work when restoring a TKLBAM GitLab backup to a v15.2+ GitLab appliance). It's disabled by default and relies on the required version of GitLab being available via the Omnibus package repos. My inclination is to leave that disabled by default into the future. But perhaps after some rigorous testing, I might add a note to the restore output that it's an option that can be enabled? We'll see...

OnePressTech's picture

Sorry I've been behind on my testing but I am finally going into test / deploy mode. Perfect timing buddy...much appreciated :-)

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

Thanks for trying it out, but as Tim notes GitLab is a bit of a beast, so has fairly high resource requirements (at least for a headless Linux server anyway).

Regardless, to double check, I just downloaded the ISO and launched it into a (KVM) VM with 2 vCPUs and 4GB RAM. And as it turns out, I can reproduce your 500 Error! Argh!

I'm pretty pissed TBH considering the time and energy that I put in and I did test the latest build code just before I built it (for release). I'm not 100% sure what has gone wrong, but clearly there is an issue!

FWIW it looks like it's something to do with the encryption keys (although I don't understand why). Here's how I worked that much out:

Open an SSH session and set the log to output to the session in real time:

tail -f /var/log/gitlab/gitlab-rails/production.log

Then I tried to log into the webUI, using username 'root' and the password I set at firstboot. This is what the log showed:

Started POST "/users/sign_in" for 127.0.0.1 at 2019-03-29 02:10:23 +0000
Processing by SessionsController#create as HTML
  Parameters: {"utf8"=>"✓", "authenticity_token"=>"[FILTERED]", "user"=>{"login"=>"root", "password"=>"[FILTERED]", "remember_me"=>"0"}}
Completed 500 Internal Server Error in 214ms (ActiveRecord: 19.6ms)
  
OpenSSL::Cipher::CipherError ():
  
lib/gitlab/crypto_helper.rb:27:in `aes256_gcm_decrypt'
app/models/concerns/token_authenticatable_strategies/encrypted.rb:55:in `get_token'
app/models/concerns/token_authenticatable_strategies/base.rb:27:in `ensure_token'
app/models/concerns/token_authenticatable_strategies/encrypted.rb:42:in `ensure_token'
app/models/concerns/token_authenticatable.rb:38:in `block in add_authentication_token_field'
app/services/application_settings/update_service.rb:18:in `execute'
lib/gitlab/metrics/instrumentation.rb:161:in `block in execute'
lib/gitlab/metrics/method_call.rb:36:in `measure'
lib/gitlab/metrics/instrumentation.rb:161:in `execute'
app/controllers/application_controller.rb:467:in `disable_usage_stats'
app/controllers/application_controller.rb:453:in `set_usage_stats_consent_flag'
lib/gitlab/middleware/rails_queue_duration.rb:24:in `call'
lib/gitlab/metrics/rack_middleware.rb:17:in `block in call'
lib/gitlab/metrics/transaction.rb:55:in `run'
lib/gitlab/metrics/rack_middleware.rb:17:in `call'
lib/gitlab/middleware/multipart.rb:103:in `call'
lib/gitlab/request_profiler/middleware.rb:16:in `call'
lib/gitlab/middleware/go.rb:20:in `call'
lib/gitlab/etag_caching/middleware.rb:13:in `call'
lib/gitlab/middleware/correlation_id.rb:16:in `block in call'
lib/gitlab/correlation_id.rb:15:in `use_id'
lib/gitlab/middleware/correlation_id.rb:15:in `call'
lib/gitlab/middleware/read_only/controller.rb:40:in `call'
lib/gitlab/middleware/read_only.rb:18:in `call'
lib/gitlab/middleware/basic_health_check.rb:25:in `call'
lib/gitlab/request_context.rb:20:in `call'
lib/gitlab/metrics/requests_rack_middleware.rb:29:in `call'
lib/gitlab/middleware/release_env.rb:13:in `call'

I'll work out a workaround and post back ASAP. Then I guess, I'll have to rebuild it...!

Jeremy Davis's picture

I've opened a bug and intend to rebuild ASAP. Unfortunately, that won't be until next week, in the meantime you'll either need to make do with the old appliance (GitLab installed from source - not recommended) or apply the below workaround (recommended):

mv /etc/gitlab/gitlab-secrets.json /etc/gitlab/gitlab-secrets.json.bak
gitlab-rails runner -e production "ApplicationSetting.current.reset_runners_registration_token!"
/usr/lib/inithooks/bin/gitlab.py

You'll need to (re)set your password, email and domain. Everything should be good after that... Further feedback certainly welcome though.

I haven't yet updated the buildcode and tested from scratch, but I have tested the workaround (numerous times...) and feel fairly confident.

OnePressTech's picture

There are a lot of reasons for a 500 error with a GitLab installation. Check your logfiles and see what the problem is.

Remember GitLab is a bit of a resource hog so you need 4GB RAM minimum to run it. It is also recommended to have another 4GB swapfile (the swapfile is an optional requirement which you will not need until your site starts to grow bigger). See https://docs.gitlab.com/ce/install/requirements.html

Cheers,

Tim (Managing Director - OnePressTech)

OnePressTech's picture

I have been running GitLab on an m1.medium AWS instance for 3 years with no issues, and this is a 1-core, 3.75GB RAM instance. I did need to add a 2GB swapfile once the GitLab developer group started to use the automated CI infrastructure for all kinds of automation tasks (dev-related and dev-unrelated), and I did need to configure all the objects to be stored in AWS S3 rather than on the local disk to ensure the local disk did not get filled up.

Cheers,

Tim (Managing Director - OnePressTech)


Jeremy Davis's picture

I really appreciate the input.

I'm still pretty gutted as I tested extensively before releasing, and it was definitely working fine during my tests. I'm not really clear what happened between my tests and this failure...

The only thing I can think of is that GitLab must have released an update between my final test and the final build (literally within an hour or 2) - and that I didn't do a final test on the actual build prior to release.

Under normal circumstances, I download and test the build (that will be released) prior to release. I can only assume that I was so over it by then, that I didn't do it on this occasion. TBH, I don't recall, and would expect that I would have followed my "normal procedure" (to avoid issues such as this). But it seems like the only viable possibility... :(

OnePressTech's picture

Don't sweat it. First release always trips over something. Once I get my client switched over to the new GitLab VM you will have me and other GitLab aficionados to assist moving forward :-)

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

You've likely already seen it Tim, but I've posted a workaround above.

In retrospect, the removal of the gitlab-secrets.json shouldn't really be required. However, I will need to do that within the firstboot scripts (to ensure that each instance of TurnKey GitLab has unique keys), so doesn't hurt to test that...!

The only thing is though, the rails console is quite slow, so I'm wondering if it might be better (or at least quicker) to adjust the DB? As noted in the docs this should work (when applied to the DB):

DELETE FROM ci_group_variables;
DELETE FROM ci_variables;

Do you have any thoughts or suggestions?

OnePressTech's picture

GitLab is a complex beast...they are constantly tweaking things to improve performance. I would think that a Rails-based solution would be safer than a DB-change option in the long-term. How slow is slow?

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

I never actually timed it, but it was a minute or 2 IIRC. Not really that long in the grand scheme of things, but feels like forever when you're sitting there waiting...

But you raise a good point. Let's stick with the Rails console...

OnePressTech's picture

Hi Jed,

You've done so much already I hate to impose, but I realised something when I was preparing to migrate my customer and it is going to be an issue for everyone else at some point in their GitLab migration journey...version matching.

The safest migration process is to:

1) backup an existing GitLab on server1

2) create a TKLX GitLab instance WITH THE SAME VERSION on server2

3) Restore the backed-up server1 GitLab files on server2

The key here is WITH THE SAME VERSION.

Would it be possible to have the install script prompt for GitLab version to install?

All the GitLab omnibus installers are available to be installed, so it should not affect the existing TKLX GitLab build process, with the exception that the console would prompt for the version to install BEFORE the installation process begins. Unless, of course, that turns a 1-step install process into a 2-step install process...we would need to install Core first to get the console up...so the GitLab install would be a second install triggered via the console.

You can find all the omnibus installations at:

https://packages.gitlab.com/gitlab/gitlab-ce

Sorry to be a pain...you've been so great. If we could do this though with little extra effort it would make all the difference to me and everyone else who now, and in the future, may need to migrate between Debian versions where our current GitLab version is behind the latest version due to the Debian version being too old.

Context:

GitLab releases a new version on the 22nd of each month. When an O/S reaches end-of-life, GitLab won't provide newer versions for it. That means everyone only has about a month to migrate their GitLab to a new server with an updated Debian before their TKLX GitLab version falls out of sync with the current GitLab version.

Thanks in advance...as always...much appreciated :-)

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

It's disabled by default, but I did include version matching in the TKLBAM hook script (for restoring a backup). However, it didn't occur to me that it might also be valuable to users at install time.

It's actually pretty easy to do. And seeing as I already have a fix to push, I'll see if I can include that in the next (bugfix) release too. We'll see (it makes a lot of sense).

In the meantime, here's how to do it. Assuming that you want v10.0.6 (just picked a version at random), it's simply a case of this:

apt update
apt remove gitlab-ce
apt install gitlab-ce=10.0.6-ce.0
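
If you're not sure which version strings are actually available, the apt cache should be able to list them (this assumes the GitLab apt repo is already configured, as it is on the appliance):

apt update
apt-cache madison gitlab-ce    # lists every gitlab-ce package version available from the configured repos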

From what I can gather, the "version" string (i.e. "10.0.6-ce.0" in my example above) is:

<VERSION>-<RELEASE_CHANNEL>.<DEB_BUILD>
Where:
<VERSION> - three-part version number, e.g. 10.0.6
<RELEASE_CHANNEL> - 'ce' (or 'ee')
<DEB_BUILD> - generally '0', but incremented if they rebuild a package for the same GitLab version

Re your comment:

GitLab releases a new version on the 22nd of each month. When an O/S reaches end-of-life, GitLab won't provide newer versions for it. That means everyone only has about a month to migrate their GitLab to a new server with an updated Debian before their TKLX GitLab version falls out of sync with the current GitLab version.

Perhaps I'm missing some context, or misunderstand your statement, but are you sure about this? I note that both Debian Jessie (i.e. the base of v14.x - already well past "standard" EOL, but still maintained via "LTS" for about another year) and Debian Stretch (i.e. the base of v15.x - current stable; "standard" EOL 1 year after Buster's release, then "LTS" support until 5 years after Buster's release) have Omnibus packages for v11.9.1 (see Jessie here and Stretch here), which by my understanding is the latest GitLab release.

OnePressTech's picture

GitLab supports older O/S versions for a while but only for security releases against the minor version.

https://docs.gitlab.com/omnibus/package-information/deprecated_os.html

Your remove-and-reinstall suggestion is a good one, but I prefer a targeted build to a remove/re-install combo...you never know how clean a remove is.

Have a look at this post for a suggested set of GitLab uninstall steps:

https://forum.gitlab.com/t/complete-uninstall-gitlab-ce-from-ubuntu-14/6...

And thanks for the adjustment. I agree that auto-version-matching on a restoration should be experimental / opt-in due to the risks of unknown side-effects. A manual version selection on install though is a safe and useful feature.

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

If you look a little further down the page you linked to, under "Supported Operating Systems", both Debian 8 (Jessie - basis of v14.x) and Debian 9 (Stretch - basis of v15.x) are currently supported (Jessie until mid 2020 & Stretch until 2022). I'm not sure, but I think that the top table (with the mention of "Raspbian Jessie" just above "Debian Wheezy") may have thrown you?

Regarding your request/suggestion, I'm potentially open to being a bit more savage in clearing out the pre-installed version on firstboot (if the user chooses to install a different version), but I'm not really keen on shipping with no version installed at all. Some users run servers offline (or at least without direct internet connection) so requiring an internet connection on firstboot doesn't seem like a good idea.

It's also worth noting that if a new/different version is installed clean, then without some tweaking, the firstboot password won't work and a new password would need to be set via the webUI. I would be inclined to leverage the tweak I do within the conf script to disable that, and set a password via the firstboot scripts (even if a new/different version is being installed).

Also the forum.gitlab.com link you provided doesn't work for me. When I first browsed there it said I needed to log in to view it. Then when I logged in, it said I didn't have adequate permissions?!

OnePressTech's picture

When you say "I'm not really keen on shipping with no version installed at all", I realise that you must be running GitLab Omnibus during the image build process. I had thought that you were running Omnibus from the image post-build, so I thought that the image contained TKLX Core + scripts only. I guess you needed to run GitLab Omnibus at build time so that TKLBAM had a golden image to delta from.

So probably the simplest solution is for me, with some guidance from you, to create a TKLDev build variant adjusted to build an image from an older GitLab Omnibus. I can then migrate my old server data across and then run the upgrade-to-latest process manually post imaging.

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

If you want to ensure that none of the original files remain (i.e. nuke the GitLab install) then use purge. I.e.:

apt purge gitlab-ce

That will remove all files/directories that the installation created. Any directories that still contain files which weren't part of the install (e.g. added later via user actions) will not be removed (only the files which were part of the install will be), and the uninstall scripts will flag them (as warnings IIRC) in the apt output.

There is also a "gitlab-ctl cleanse" command which wipes out all GitLab-related data, but without removing GitLab itself. I did try using that initially as part of the setup, but it wiped out the config file and the database, which is not what I wanted (I just wanted secret regeneration). Although perhaps we can leverage that in this instance? Regardless, you may also find that command useful for cleaning things up?
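
And for a truly scorched-earth cleanup, something along these lines should do it. Note that the paths below are the standard Omnibus locations (an assumption worth double-checking on your system), and that this removes config and data too, so only run it if you really mean it:

gitlab-ctl stop                 # stop all GitLab services first
apt purge gitlab-ce             # remove the package, including its packaged config
rm -rf /opt/gitlab /var/opt/gitlab /etc/gitlab /var/log/gitlab    # wipe any leftover Omnibus dirs (config, data & logs)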

OnePressTech's picture

I just realised that this is the only option for someone using a TKLX AWS image, isn't it... Correct me if I am wrong, but isn't TKLDev limited to building non-AWS images?

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

First of all, the new appliance will be really good (once I apply the fix and include this feature) - I promise! So you should definitely use ours rather than making your own... ! :)

But to answer your question:

Technically buildtasks (included within TKLDev by default - since v14.1 or v14.2 IIRC) can be used to make your own AWS builds (like any of the other builds). However, I've never actually successfully used it to build AMIs with my personal AWS account.

Another user did try at one point and I provided a little assistance, but couldn't get it working. I didn't have the time or energy to investigate further at the time. I've always intended to circle back to it as soon as I have a "spare moment", but you know how common those are around here...! :)

It's also worth noting, that by default the bt-ec2 script will copy AMIs to all regions currently supported by the Hub. Chances are that isn't what you want, so you'll likely need/want to adjust it yourself anyway.

In case you haven't played with TKLDev, it's worth noting that if you wish to develop a custom appliance, you will need to build an ISO first (using 'bt-iso'); the other scripts then process that ISO to produce the other build types.
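
As a very rough sketch only (the paths and argument formats below are from memory, so treat them as assumptions and check the buildtasks README/scripts for the exact usage), the flow looks something like this:

cd /turnkey/buildtasks                  # assumed buildtasks location on TKLDev
./bt-iso gitlab                         # build the ISO from the gitlab buildcode first
./bt-ec2 gitlab-15.3-stretch-amd64      # then build/register an AMI from that ISO (name format is an assumption)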

FWIW, buildtasks was published publicly around the time I started working with TurnKey (or soon after). It was initially published to provide greater transparency. But I included it in TKLDev some time ago as I felt that it still had value for end users wishing to produce their own appliances. Regardless, it is still primarily aimed at our internal usage, rather than being designed for end users.

OnePressTech's picture

So I'm thinking the following process to upgrade my server (Server1)...

1) Build a GitLab Server2 using the latest TKLX Core and the same GitLab version as Server1

2) GitLab backup from Server1 and restore to Server2 (see the command sketch below)

3) Upgrade Server2 to latest GitLab

4) Build TKLX GitLab (Server3) and upgrade to the latest GitLab version

5) Backup Server2 and restore to Server3

A bit torturous but this is probably the safest path to a clean outcome.
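
For reference, steps 2 and 5 would look roughly like this on an Omnibus install (the BACKUP value is whatever timestamp your backup tarball is named with; also remember to copy /etc/gitlab/gitlab.rb and /etc/gitlab/gitlab-secrets.json across):

# on the source server - the backup lands in /var/opt/gitlab/backups by default
gitlab-rake gitlab:backup:create

# on the target server (same GitLab version), after copying the backup tarball into /var/opt/gitlab/backups
gitlab-ctl stop unicorn
gitlab-ctl stop sidekiq
gitlab-rake gitlab:backup:restore BACKUP=<timestamp_of_backup>
gitlab-ctl restart
gitlab-rake gitlab:check SANITIZE=true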

Thanks for everything you have done...don't worry about the hard work being lost...one way or the other I will end up on your shiny new TKLX GitLab :-)

Cheers,

Tim (Managing Director - OnePressTech)

OnePressTech's picture

What can I say...why change something that's working...until it's out of support of course :-)

Cheers,

Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

Sorry I totally missed that.

FWIW Wheezy was EOL 2018-06-01 (Debian moved it from "LTS" to "archive"), so as far as Debian are concerned, there haven't been any OS level updates since then. Having said that, Freexian has been providing ELTS for Wheezy, so there probably have been OS level updates, at least for important stuff. And seeing as you are getting GitLab via Omnibus, most things would be updated via that anyway, I guess...

Sytko's picture

When can we expect a new ISO image? Has the 500 error when starting from the ISO image been fixed?

Jeremy Davis's picture

I was hoping to have a fixed ISO available already, but then I tried to also implement Tim's request re selecting the version to use at first boot. I decided to do that via python (with the intention of making it somewhat generic so that it could be used elsewhere too). But I'm still relatively new to "proper" python programming (I've been using python for glue code for years, but it's not quite the same...). Needless to say, I got a bit bogged down with that and unfortunately, I've since been dragged away to some other high priority tasks.

Unless I get a chance to get back to GitLab early next week, I should probably just rebuild it (with the fix for this particular issue) and get that published ASAP. The pressure to complete the version selection stuff will be off then, and I can finish the code I've started when I get a chance and include the version selection option in a future release. In the meantime, we can just document how to change versions - actually, GitLab have docs that pretty much cover that. FWIW, my reading suggests that there is no issue with doing this on an Omnibus install with no data. However, it's worth noting that unless you plan to restore a backup, it's not a good idea to downgrade the GitLab version.

In the meantime, you have a few options if you wish to persevere with GitLab on TurnKey:

  • Recommended: Use the new v15.2 GitLab and apply the issue fix/workaround (as noted both on the issue and in my previous post above)
  • Use the older v15.1 release (GitLab installed from source) - Not recommended
  • Alternatively, do what Tim has been doing for years - and install GitLab yourself (probably on Core; a rough sketch of the Omnibus install is below).
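
For that last option, GitLab's documented Omnibus install on a TurnKey Core server looks roughly like this (the EXTERNAL_URL value is just a placeholder for your own domain):

apt update
apt install -y curl ca-certificates
curl -s https://packages.gitlab.com/install/repositories/gitlab/gitlab-ce/script.deb.sh | bash
EXTERNAL_URL="https://gitlab.example.com" apt install -y gitlab-ce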

As I note above, I recommend that you simply apply the fix. I'll repost it here so it's easier to find:

mv /etc/gitlab/gitlab-secrets.json /etc/gitlab/gitlab-secrets.json.bak
gitlab-rails runner -e production "ApplicationSetting.current.reset_runners_registration_token!"
/usr/lib/inithooks/bin/gitlab.py
Jeremy Davis's picture

Great work on that. Out of interest, how did you build your ISO? Did you use our buildtool/infrastructure TKLDev? If so, generally the best way to go is to open a pull request on GitHub for the relevant buildcode (obviously our GitLab repo in this case). Then we can rebuild the appliance with the updated buildcode and publish in all our usual formats.

If you used some alternate method to build your ISO, then that's fine, and your efforts are still appreciated, but we only distribute appliances that we build ourselves (from the relevant GitHub repo buildcode). That policy is in place because it's the only way we can be assured that a build complies with our standards. So you'd need to share your file yourself via some sort of filesharing service, such as Dropbox, Google Drive or one of the many other online options.

Also as it turns out in this case, it looks like I beat you to it. Whilst I haven't actually written up a relevant "release" blog post yet, I have uploaded our updated (and fixed) GitLab appliance (v15.3 - downloadable from the appliance page) and ours also includes GitLab v11.10.4. I'm sorry that I wasn't a bit quicker with it (both building it and the announcement) but at least you know how it works now for future reference. Please feel free to give it a spin and share your feedback.

Thanks again for your contribution. And sorry that I wasn't a bit more public on what I was up to...

Jeremy Davis's picture

What? Even the 15.3 one?! I tested it quite extensively and it was working fine when I published it?!

I'll have another look at it myself ASAP.

And/or please feel free to share your code that is working. The build code repo is here.

Jeremy Davis's picture

I just double checked and unfortunately, you are correct...

I'm not at all clear why we didn't encounter the issue when testing, but if I reapply the fix we had previously documented (reposted below), it still works. So I'm not sure what is going on there; it definitely requires more testing and another release though. Bugger...

Here's the fix that resolves it:

gitlab-rails runner -e production "ApplicationSetting.current.reset_runners_registration_token!"

Thanks again for reporting. Whilst it's a pain to have to fix it again, I'd much rather be aware...

Alan Farkas's picture

First, can I say how helpful it would have been for a note/link to have been posted on the main download page for this VM. I've literally spent days on this...

On a fresh install of the TurnKey GitLab v15.3 VM, on both VMware Workstation 15 and ESXi 5.5, I consistently get a 500 error ("Whoops, something went wrong at our end") in the following situations:

1) Attempting to log in to GitLab with the root ID. Somehow, though, once I create an additional ID, I'm then able to successfully log in with the root ID.

2) Attempting to save any Application settings in the Admin Area.

I did a little digging and discovered some "OpenSSL CipherError" messages in the log.

I tried a bunch of things to resolve it (re-installing, running "gitlab-rake" to check the configuration, which was fine), but nothing worked.

I eventually came across this fix (for migration and backup/restore issues that presented the same way as my error): https://gitlab.com/gitlab-org/gitlab-ce/issues/56403

gitlab-rails c
# in the Rails console (note: the settings model is ApplicationSetting, not Application)
settings = ApplicationSetting.last
settings.update_column(:runners_registration_token_encrypted, nil)
exit
gitlab-ctl restart

Jeremy Davis's picture

Sorry to hear that you wasted so much time on this... :( We have a fixed image in the pipeline, but it's not yet been published.

Re your note/request for "a note/link to have been posted on the main download page", actually there is, but it seems that it wasn't obvious/clear enough?! FWIW on each appliance page (e.g. the GitLab page), under the heading "GitHub", there is a text link labelled "Issues" (which in the case of GitLab links here). If you follow that link, the GitLab issue you are referring to is the top issue listed there (i.e. this issue). You'll find a fix/workaround documented there (although clearly you no longer need it...).

We do it that way as we have ~100 different appliances and already have an issue tracker to track any bugs/issues, so having an "automated" link means that we can spend less time explicitly updating appliance page text, and more time actually resolving these sorts of issues and pushing out updates. Although in this case the updated/fixed GitLab release lag has been far from ideal...

Do you have any suggestion on how we might be able to make that link clearer to users such as yourself? Had you noticed that, it would have certainly been a much nicer experience for you and saved you a ton of time.

I guess the other thing worthy of note is that it may be a little confusing for the uninitiated because that issue is closed (we close issues once the code that resolves it has been committed, there can sometimes be a lag of a week or 2 for the publishing process).

Anyway, I'm really glad to hear that you managed to work it out and got up and running eventually. I look forward to your suggestions on making the Issues link more noticeable (without dominating the info).

Alan Farkas's picture

So, now I feel stupid :) I've looked on that download page at least a dozen times, and I kept missing the "issues" link. Now that I know it's there, I'll know to look at it from now on.

I get your point about the work, overhead, maintenance, etc. involved in maintaining specialized notes for each appliance. I also understand the amount of work that goes into maintaining these appliances, and that everyone involved probably has a regular day job and is doing this as a "labor of love".

Perhaps if the Issues link was on its own line with a somewhat bigger font, my eyes wouldn't have glossed over it. Since this 500 issue was technically a "blocker", perhaps a link entitled something like "There are some known issues - please read these notes before installing this appliance" would help. As someone who maintains a commercial / open source application, I have done something similar for my own clients.

Thanks for all your work on building this appliance.

Jeremy Davis's picture

I just had a quick look at the website code to see if I could easily adjust it, and it looks like those links are generated with some PHP trickery, so I don't want to risk breaking anything (I have a passable understanding of PHP, but it isn't really my forte).

However, to ensure that it doesn't get forgotten, I've opened an issue on our tracker. I'm not sure when we'll get to it, but we need to do some website maintenance soon (there's a few outstanding bugs that need to be resolved). So hopefully we'll do something with that when we get the maintenance done.

FWIW "show stopper" issues such as this GitLab "500 error" usually get very high priority. But as I've ploughed so much time and energy into this rewrite of the GitLab appliance build, I got a little tired of it and haven't moved as quickly as I ideally should have. Thanks for your patience and understanding.

Please feel free to share any other feedback and suggestions you might have for us. I can't promise that we can follow all your ideas (and even if we do, I can't guarantee timely implementation) but we certainly love to hear user thoughts and ideas. Worst case, we'll add them to the tracker and aim to get to them as soon as we can. :)

Alan Farkas's picture

I can't figure out how to update the "root@gitlab" email address that's attached to the nightly CRON-APT job. I created a ".forward" file, with an alternate email address, in the "/root" folder. But the CRON-APT emails are still going to "root@gitlab".

Jeremy Davis's picture

First up, it's likely worth noting that this won't be specific to the GitLab appliance. The default Postfix (the mail transfer agent (MTA) that we preinstall) config is common to all TurnKey appliances. It's perhaps also worth further noting that by default on the GitLab appliance, GitLab should be configured to send emails via Postfix (on localhost). So if you haven't adjusted the GitLab email settings and mails from GitLab are working, then the rest of this post is possibly irrelevant or wrong.

Anyway, I'm not 100% sure, but by my understanding, the emails will still be addressed to "root@gitlab" (i.e. root@localhost), but should then be forwarded to the email address you have noted in the /root/.forward file.

I suspect that if mail is not being forwarded to the email address in your forwards file, then it's about email deliverability rather than anything else.

Depending on where your server is running, outgoing mail from Postfix may be blocked completely (e.g. if you are self-hosting via a normal consumer ISP plan). Alternatively, the IP address of your server may have been blacklisted on one of the many spam blacklist sites. That is quite common for IP addresses provided by cloud/VPS companies (because spammers often abuse these services).

As a general rule, the easiest way to work around that is to configure Postfix to send emails via an SMTP relay (rather than directly). We have a Confconsole plugin which provides an easy way to configure an SMTP mail relay.
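
If you'd rather set that up by hand than via Confconsole, the relevant Postfix settings live in /etc/postfix/main.cf and look roughly like this (the relay host, port and credentials file are placeholders/assumptions - adjust for your provider):

# /etc/postfix/main.cf (excerpt)
relayhost = [smtp.example.com]:587
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt

The /etc/postfix/sasl_passwd file would then contain a line like "[smtp.example.com]:587 username:password"; run "postmap /etc/postfix/sasl_passwd" and "systemctl restart postfix" afterwards.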

If you've already configured GitLab emails, this is likely irrelevant, but FWIW by default GitLab is configured to send emails via Postfix on localhost. So setting up the SMTP relay for Postfix should also allow GitLab email sending via the same SMTP relay (without need to configure GitLab email sending separately).

Finally, if you are sure everything is right and the emails aren't sending as expected, it's likely worth checking the Postfix logs. They should be found within /var/log and IIRC are called mail.log and mail.err (but I could be wrong...).
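
A minimal sketch of what checking those logs (and the mail queue) might look like:

tail -n 50 /var/log/mail.log                                # recent Postfix activity
grep -Ei 'status=(bounced|deferred)' /var/log/mail.log      # look for delivery failures
postqueue -p                                                # anything stuck in the outgoing queue?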

Jeremy Davis's picture

Hi all. Apologies for the delays, but I can happily now let you know that the (fully functioning OOTB) v15.4 TurnKey GitLab appliance (with GitLab installed from the Omnibus package) is now available (published late last week - noted within this blog post).

Sorry it has taken this long to finally get a fully working appliance published, but hopefully we're there now!
