Bjørn Otto Vasbotten's picture

Question one:

Should I be messing with upgrading GitLab myself, or should I just wait for TKL to do this for me?


And if the answer to that is yes, how do I go about upgrading?
I have some problems following the instructions posted on the GitLab site:

Question two:

Should I follow an incremental upgrade path as described above, or should I go straight from 2.5 to 2.8?


Question three:

What is the best way of getting the upgrade done with git? Do I run the git clone command directly in the folder that GitLab is installed in?

Adrian Moya's picture


1. Upgrade GitLab yourself. I don't think the TKL guys are going to keep up with new versions of GitLab, as they come out too quickly. And it's a good thing to keep GitLab upgraded.

2. You have to follow the incremental path. I'm not sure which version finally made it into the TKL appliance, but you'll have to upgrade from there on.

3. You shouldn't run a clone command; instead you should pull. But I can't help you with this, as I don't have the appliance running. Why don't you post the specific problems you're having with the official upgrade instructions, to see if I can help you a bit more?
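The pull-based approach Adrian describes can be sketched like this. This is a demonstration in a throwaway directory; the example.com URL is a placeholder, and on the appliance the same git commands would be run (as the gitlab user) inside /home/gitlab/gitlab against the real gitlabhq remote:

```shell
# Sketch of attaching an existing (non-git) source tree to an upstream repo,
# demonstrated in a scratch directory. The remote URL is a placeholder.
dir=$(mktemp -d)
cd "$dir"
git init -q .                                            # creates the missing .git directory
git remote add origin https://example.com/gitlabhq.git   # placeholder remote
git remote                                               # prints: origin
# On a real install, 'git pull origin stable' would then merge the stable
# branch on top of the unpacked tarball contents.
```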

Mohd Najib Bin Ibrahim's picture

Hi, I am new to git and GitLab. I'm currently using TurnKey GitLab; the GitLab version is 2.5.0. There is no '.git' directory in '/home/gitlab/gitlab/'. Should I start with 'git remote add origin ...'? If yes, what is the correct command?

Jeremy Davis's picture

And it's close (I think) but I can't be sure.

I am using the instructions here:
(as I figure it's probably the most reasonable way to go)

Ok so here is what I have which seems to work ok:

service redis-server stop
service nginx stop
cd /home/gitlab/gitlab
sudo -u gitlab git init
sudo -u gitlab git remote add origin git://
sudo -u gitlab git pull origin stable
sudo -u gitlab gem update --system
sudo -u gitlab cp config/gitlab.yml config/gitlab.yml.orig
sudo -u gitlab cp config/gitlab.yml.example config/gitlab.yml

But it all falls over when I run


sudo -u gitlab bundle install --without development test
Fetching gem metadata from
Error Bundler::HTTPError during request to dependency API
Fetching full source index from
remote: Counting objects: 5135, done.
remote: Compressing objects: 100% (2802/2802), done.
remote: Total 5135 (delta 2453), reused 4878 (delta 2250)
Receiving objects: 100% (5135/5135), 1.99 MiB | 212 KiB/s, done.
Resolving deltas: 100% (2453/2453), done.
remote: Counting objects: 149, done.
remote: Compressing objects: 100% (85/85), done.
remote: Total 149 (delta 59), reused 134 (delta 50)
Receiving objects: 100% (149/149), 26.44 KiB | 14 KiB/s, done.
Resolving deltas: 100% (59/59), done.
remote: Counting objects: 154, done.
remote: Compressing objects: 100% (93/93), done.
remote: Total 154 (delta 51), reused 132 (delta 32)
Receiving objects: 100% (154/154), 76.00 KiB | 15 KiB/s, done.
Resolving deltas: 100% (51/51), done.
remote: Counting objects: 333, done.
remote: Compressing objects: 100% (144/144), done.
remote: Total 333 (delta 193), reused 296 (delta 161)
Receiving objects: 100% (333/333), 44.06 KiB | 20 KiB/s, done.
Resolving deltas: 100% (193/193), done.
Using rake ( 
Installing i18n (0.6.1) 
Installing multi_json (1.3.6) 
Installing activesupport (3.2.8) 
Installing builder (3.0.2) 
Installing activemodel (3.2.8) 
Using erubis (2.7.0) 
Installing journey (1.0.4) 
Using rack (1.4.1) 
Using rack-cache (1.2) 
Using rack-test (0.6.1) 
Using hike (1.2.1) 
Using tilt (1.3.3) 
Using sprockets (2.1.3) 
Installing actionpack (3.2.8) 
Installing mime-types (1.19) 
Using polyglot (0.3.3) 
Using treetop (1.4.10) 
Using mail (2.4.4) 
Installing actionmailer (3.2.8) 
Using arel (3.0.2) 
Using tzinfo (0.3.33) 
Installing activerecord (3.2.8) 
Installing activeresource (3.2.8) 
Using bundler (1.1.4) 
Using rack-ssl (1.3.2) 
Installing json (1.7.5) with native extensions 
Using rdoc (3.12) 
Installing thor (0.16.0) 
Installing railties (3.2.8) 
Installing rails (3.2.8) 
Installing acts-as-taggable-on (2.3.1) 
Using bcrypt-ruby (3.0.1) 
Using blankslate ( 
Installing bootstrap-sass ( 
Using carrierwave (0.6.2) 
Using charlock_holmes (0.6.8) 
Installing chosen-rails ( 
Installing coffee-script-source (1.3.3) 
Installing execjs (1.4.0) 
Using coffee-script (2.2.0) 
Using coffee-rails (3.2.2) 
Using colored (1.2) 
Using daemons (1.1.8) 
Installing orm_adapter (0.3.0) 
Installing warden (1.2.1) 
Installing devise (2.1.2) 
Using diff-lcs (1.1.3) 
Installing draper (0.17.0) 
Using escape_utils (0.2.4) 
Using eventmachine (0.12.10) 
Installing multipart-post (1.1.5) 
Installing faraday (0.8.4) 
Installing ffaker (1.14.0) 
Installing sass (3.1.19) 
Installing sass-rails (3.2.5) 
Installing font-awesome-sass-rails ( 
Installing foreman (0.47.0) 
Installing gemoji (1.1.1) 
Using git (1.2.5) 
Using posix-spawn (0.3.6) 
Installing yajl-ruby (1.1.0) with native extensions 
Installing pygments.rb (0.3.1) 
Installing github-linguist (2.3.4) 
Installing github-markup (0.7.4) 
Installing gitlab_meta (3.0) 
Installing gratr19 ( 
Using grit (2.5.0) from (at 7f35cb9) 
Installing hashery (1.5.0) 
Installing gitolite (1.1.0) 
Using grack (1.0.0) from (at master) 
Using hashie (1.2.0) 
Using multi_xml (0.5.1) 
Installing rack-mount (0.8.3) 
Installing grape (0.2.1) 
Installing haml (3.1.6) 
Using haml-rails (0.3.4) 
Using httparty (0.8.3) 
Installing httpauth (0.1) 
Installing jquery-atwho-rails (0.1.6) 
Using jquery-rails (2.0.2) 
Installing jquery-ui-rails (0.5.0) 
Installing jwt (0.1.5) 
Installing kaminari (0.14.0) 
Using kgio (2.7.4) 
Using libv8 ( 
Installing modernizr (2.5.3) 
Using mysql2 (0.3.11) 
Using net-ldap (0.2.2) 
Installing oauth (0.4.7) 
Installing oauth2 (0.8.0) 
Using omniauth (1.1.0) 
Installing omniauth-oauth2 (1.1.0) 
Installing omniauth-github (1.0.3) 
Installing omniauth-google-oauth2 (0.1.13) 
Using pyu-ruby-sasl ( 
Using rubyntlm (0.1.1) 
Using omniauth-ldap (1.0.2) from (at f038dd8) 
Installing omniauth-oauth (1.0.1) 
Installing omniauth-twitter (0.0.13) 
Installing pg (0.14.0) with native extensions 
Gem::Installer::ExtensionBuildError: ERROR: Failed to build gem native extension.

        /usr/local/bin/ruby extconf.rb 
checking for pg_config... no
No pg_config... trying anyway. If building fails, please try again with
checking for libpq-fe.h... no
Can't find the 'libpq-fe.h header
*** extconf.rb failed ***
Could not create Makefile due to some reason, probably lack of
necessary libraries and/or headers.  Check the mkmf.log file for more
details.  You may need configuration options.

provided configuration options:

Gem files will remain installed in /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/pg-0.14.0 for inspection.
Results logged to /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/pg-0.14.0/ext/gem_make.out
An error occured while installing pg (0.14.0), and Bundler cannot continue.
Make sure that `gem install pg -v '0.14.0'` succeeds before bundling.

I tried

sudo -u gitlab gem install pg -v '0.14.0' --without-pg

and

sudo -u gitlab gem install pg -v '0.14.0' --without-pg_config

in the hope that it wasn't really needed (from what I gather, the pg gem relates to PostgreSQL, and the TKL appliance is built with a MySQL backend). But they both return "invalid option" errors...
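For what it's worth, that build error ("Can't find the 'libpq-fe.h' header") is the pg gem failing to find PostgreSQL's client headers, which on Debian ship in the libpq-dev package. A hedged sketch of the two usual ways around it (the package name is standard Debian; the bundler group names are assumed from GitLab's Gemfile of that era, and none of this is tested against the appliance):

```shell
# Option 1: give the pg gem the headers it needs to compile its extension.
apt-get update
apt-get install -y libpq-dev
sudo -u gitlab gem install pg -v '0.14.0'

# Option 2: since the appliance uses MySQL, exclude the postgres gem group
# entirely when bundling, so pg is never built at all.
sudo -u gitlab bundle install --without development test postgres
```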

Interestingly enough, I started playing with this upgrade on another instance, documenting as I went, and got past this point (but ran into trouble later, so I started again). Not sure what that's about...? I'll try again another time...

[update] I think there is a bug and I just had poor timing... The repo has been updated (to v3.0.1) and I think the first time (when it got past this bit) was back with v2.9. I have just posted on the GitLab mailing list and we'll wait to see what happens...

And it seems like the issue I was having before (on my original server) was to do with the SSH key. I just read about it on the GitLab mailing list.

Andrew Stewart's picture

I've been having similar trouble upgrading the GitLab 2.5 distro from TKL.  It's frustrating because the TKL distro is really convenient for setting up but upgrading seems impossible.

Jeremy Davis's picture

When I get this sorted I'll post the lot together as a clean post - or perhaps in the wiki or maybe as a TKLPatch 

So I posted on the GitLab mailing list and found out that things have changed a little...

Instead of 

sudo -u gitlab bundle install --without development test

you need to run

sudo -u gitlab bundle install --without development test postgres sqlite

which has got me further, but still not quite there; now I am stuck at the final step:

sudo -u gitlab bundle exec rake gitlab:app:status RAILS_ENV=production
Starting diagnostics
/home/git/repositories/ is writable?............YES 
remote: Counting objects: 24, done. 
remote: Compressing objects: 100% (17/17), done. 
remote: Total 24 (delta 2), reused 0 (delta 0) 
Receiving objects: 100% (24/24), done. 
Resolving deltas: 100% (2/2), done. 
Can clone gitolite-admin?............YES 
UMASK for .gitolite.rc is 0007? ............YES 
/home/git/.gitolite/hooks/common/post-receive exists? ............NO 

rake aborted!
unexpected return
Tasks: TOP => gitlab:app:status
(See full trace by running task with --trace)

But it's definitely there (I copied it there and chowned it to git:git as per the instructions):

ls -la /home/git/.gitolite/hooks/common/

total 32
drwxr-xr-x 2 git git 4096 Oct 23 21:03 .
drwxr-xr-x 4 git git 4096 Aug 11 14:54 ..
-rw-r--r-- 1 git git    0 Aug 11 14:54 gitolite-hooked
-rw-r--r-- 1 git git  288 Aug 11 14:54 gl-pre-git.hub-sample
-rwxr-xr-x 1 git git  471 Oct 23 21:03 post-receive
-rwxr-xr-x 1 git git  825 Aug 11 14:54 post-receive.mirrorpush
-rwxr-xr-x 1 git git 4348 Aug 11 14:54 update
-rw-r--r-- 1 git git 1405 Aug 11 14:54 update.secondary.sample

I have posted again on the GitLab mailing list but if anyone here has any bright ideas, I'd love to hear them....

[update] I have fixed this issue too now (it was to do with permissions). I don't have my notes handy, but it's pretty straightforward. Unfortunately it still isn't working... :( It passes all the tests, but the GitLab server still refuses to start... I think I'll start again on a fresh TKL GitLab appliance when I get a chance and see how we go from there...

Bjørn Otto Vasbotten's picture

Just wanted to say thanks to Jeremy for your efforts on this issue, much appreciated. :)

Jeremy Davis's picture

Just wish that I had something better to report back... I tried again with a fresh GitLab instance (number 5...!) and can now complete all the steps right through to the last - a successful result from "sudo -u gitlab bundle exec rake gitlab:app:status RAILS_ENV=production" but unfortunately GitLab still won't start. :(

The error I get is:

Starting Gitlab service: master failed to start, check stderr log for details

The contents of log/unicorn.stderr.log are:

I, [2012-10-24T12:25:51.925647 #4279]  INFO -- : listening on addr=/home/gitlab/gitlab//tmp/sockets/gitlab.socket fd=5
I, [2012-10-24T12:25:51.937718 #4279]  INFO -- : Refreshing Gem list
I, [2012-10-24T12:26:15.584286 #4279]  INFO -- : master process ready
I, [2012-10-24T12:26:15.661607 #4321]  INFO -- : worker=0 ready
I, [2012-10-24T12:26:15.677125 #4324]  INFO -- : worker=1 ready
E, [2012-10-24T21:33:18.737564 #10926] ERROR -- : adding listener failed addr=/home/gitlab/gitlab//tmp/sockets/gitlab.socket (in use)
E, [2012-10-24T21:33:18.737796 #10926] ERROR -- : retrying in 0.5 seconds (4 tries left)
E, [2012-10-24T21:33:19.238471 #10926] ERROR -- : adding listener failed addr=/home/gitlab/gitlab//tmp/sockets/gitlab.socket (in use)
E, [2012-10-24T21:33:19.238562 #10926] ERROR -- : retrying in 0.5 seconds (3 tries left)
E, [2012-10-24T21:33:19.738984 #10926] ERROR -- : adding listener failed addr=/home/gitlab/gitlab//tmp/sockets/gitlab.socket (in use)
E, [2012-10-24T21:33:19.739079 #10926] ERROR -- : retrying in 0.5 seconds (2 tries left)
E, [2012-10-24T21:33:20.239533 #10926] ERROR -- : adding listener failed addr=/home/gitlab/gitlab//tmp/sockets/gitlab.socket (in use)
E, [2012-10-24T21:33:20.241067 #10926] ERROR -- : retrying in 0.5 seconds (1 tries left)
E, [2012-10-24T21:33:20.741482 #10926] ERROR -- : adding listener failed addr=/home/gitlab/gitlab//tmp/sockets/gitlab.socket (in use)
E, [2012-10-24T21:33:20.741582 #10926] ERROR -- : retrying in 0.5 seconds (0 tries left)
E, [2012-10-24T21:33:21.241942 #10926] ERROR -- : adding listener failed addr=/home/gitlab/gitlab//tmp/sockets/gitlab.socket (in use)
/home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn/socket_helper.rb:140:in `initialize': Address already in use - /home/gitlab/gitlab//tmp/sockets/gitlab.socket (Errno::EADDRINUSE)
	from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn/socket_helper.rb:140:in `new'
	from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn/socket_helper.rb:140:in `bind_listen'
	from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:224:in `listen'
	from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:741:in `block in inherit_listeners!'
	from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:741:in `each'
	from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:741:in `inherit_listeners!'
	from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:123:in `start'
	from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/unicorn-4.3.1/bin/unicorn_rails:209:in `'
	from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/bin/unicorn_rails:23:in `load'
	from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/bin/unicorn_rails:23:in `'

I haven't had a really close look, so perhaps I'm missing something obvious, but I have no idea what the error is about (I know nothing about Ruby...) and I've had no response so far to my post on the GitLab mailing list. If I get time I may try one more time, and if I get the same result and no response from the mailing list, then I think I will post it as an 'issue' against GitLab on GitHub...
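For the record, EADDRINUSE on a unix socket generally means one of two things: another unicorn master is still alive, or a previous master died without unlinking its socket file. A hedged clean-up sketch (the socket path is taken from the log above; otherwise untested):

```shell
# Check for leftover unicorn processes before assuming the socket is stale.
ps aux | grep '[u]nicorn'
# If nothing is still running, remove the orphaned socket and try again.
service gitlab stop
rm -f /home/gitlab/gitlab/tmp/sockets/gitlab.socket
service gitlab start
```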

Anyway here is the code that I am using:

service gitlab stop
cd /home/gitlab/gitlab
sudo -u gitlab git init
sudo -u gitlab git remote add origin git://
sudo -u gitlab git pull origin stable
sudo -u gitlab cp config/gitlab.yml config/gitlab.yml.orig
sudo -u gitlab cp config/gitlab.yml.example config/gitlab.yml
sudo -u gitlab bundle install --without development test postgres sqlite
sudo -u gitlab bundle exec rake db:migrate RAILS_ENV=production
cp lib/hooks/post-receive /home/git/.gitolite/hooks/common/post-receive
chown git:git /home/git/.gitolite/hooks/common/post-receive
chmod g+rwx /home/git/.gitolite
sudo -u git -H sed -i "s/\(GIT_CONFIG_KEYS\s*=>\s*\).\{2\}/\1'.*'/g" /home/git/.gitolite.rc
sudo -u gitlab bundle exec rake gitlab:app:status RAILS_ENV=production

If anyone else has any bright ideas I'd love to hear...

[update] Just updated the code & comments a little. Nothing major, just realised that the redis-server doesn't need to be stopped, just gitlab...

Andrew Stewart's picture

Here's cheering you on. I'll buy you a beer with paypal if you get it! :D


Nauman's picture

I found that it wasn't able to start resque. You can try running resque manually; it is also missing some additional config in gitlab.yml, which I found over here:



  host: localhost
  port: 80
  https: false
  default_projects_limit: 10
  # backup_path: "/vol/backups"   # default: Rails.root + backups/
  # backup_keep_time: 604800      # default: 0 (forever) (in seconds)
  # disable_gravatar: true        # default: false - Disable user avatars from
Jeremy Davis's picture

Thanks for the input.

'./' returns the same error for me as 'service gitlab start', except it mentions that a trace can be done for full output. I tried that, but I'm not sure what's going on there; it ran exactly the same (maybe it saves to a file somewhere, because there was no on-screen trace).

Also, I double-checked the config file, but the updated format seems to be covered in my code by the line:

sudo -u gitlab cp config/gitlab.yml.example config/gitlab.yml

Jeremy Davis's picture

I just saw that GitLab has been updated to 3.0.3 so I tried again to update this appliance from a clean install (#6)... Still no joy though... :(

I just tweaked the code I am using for this update in the above forum post. It is so frustrating that this doesn't work, because I can't figure out why. Also, FYI, after reading the error I posted above it seemed that the socket was the problem; with naive optimism I deleted the .socket file in #5, but it still errors.

The errors I get now (in #6) are:

cat log/unicorn.stderr.log 
I, [2012-10-28T03:04:46.430091 #4095]  INFO -- : listening on addr=/home/gitlab/gitlab//tmp/sockets/gitlab.socket fd=5
I, [2012-10-28T03:04:46.430805 #4095]  INFO -- : Refreshing Gem list
I, [2012-10-28T03:05:08.422747 #4095]  INFO -- : master process ready
I, [2012-10-28T03:05:08.479913 #4135]  INFO -- : worker=0 ready
I, [2012-10-28T03:05:08.495342 #4138]  INFO -- : worker=1 ready
I, [2012-10-28T03:11:11.930897 #4095]  INFO -- : reaped # worker=0
I, [2012-10-28T03:11:11.931046 #4095]  INFO -- : reaped # worker=1
I, [2012-10-28T03:11:11.931136 #4095]  INFO -- : master complete
I, [2012-10-28T03:46:36.162472 #6040]  INFO -- : unlinking existing socket=/home/gitlab/gitlab//tmp/sockets/gitlab.socket
I, [2012-10-28T03:46:36.192208 #6040]  INFO -- : listening on addr=/home/gitlab/gitlab//tmp/sockets/gitlab.socket fd=5
I, [2012-10-28T03:46:36.192752 #6040]  INFO -- : Refreshing Gem list
/home/gitlab/gitlab/app/models/event/push_trait.rb:2:in `': undefined method `as_trait' for Event::PushTrait:Module (NoMethodError)
from /home/gitlab/gitlab/app/models/event/push_trait.rb:1:in `'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.8/lib/active_support/dependencies.rb:251:in `require'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.8/lib/active_support/dependencies.rb:251:in `block in require'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.8/lib/active_support/dependencies.rb:236:in `load_dependency'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.8/lib/active_support/dependencies.rb:251:in `require'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.8/lib/active_support/dependencies.rb:359:in `require_or_load'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.8/lib/active_support/dependencies.rb:313:in `depend_on'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/activesupport-3.2.8/lib/active_support/dependencies.rb:225:in `require_dependency'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/railties-3.2.8/lib/rails/engine.rb:439:in `block (2 levels) in eager_load!'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/railties-3.2.8/lib/rails/engine.rb:438:in `each'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/railties-3.2.8/lib/rails/engine.rb:438:in `block in eager_load!'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/railties-3.2.8/lib/rails/engine.rb:436:in `each'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/railties-3.2.8/lib/rails/engine.rb:436:in `eager_load!'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/railties-3.2.8/lib/rails/application/finisher.rb:53:in `block in '
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/railties-3.2.8/lib/rails/initializable.rb:30:in `instance_exec'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/railties-3.2.8/lib/rails/initializable.rb:30:in `run'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/railties-3.2.8/lib/rails/initializable.rb:55:in `block in run_initializers'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/railties-3.2.8/lib/rails/initializable.rb:54:in `each'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/railties-3.2.8/lib/rails/initializable.rb:54:in `run_initializers'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/railties-3.2.8/lib/rails/application.rb:136:in `initialize!'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/railties-3.2.8/lib/rails/railtie/configurable.rb:30:in `method_missing'
from /home/gitlab/gitlab/config/environment.rb:5:in `'
from `require'
from `block in '
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/rack-1.4.1/lib/rack/builder.rb:51:in `instance_eval'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/rack-1.4.1/lib/rack/builder.rb:51:in `initialize'
from `new'
from `'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn.rb:44:in `eval'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn.rb:44:in `block in builder'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/unicorn-4.3.1/bin/unicorn_rails:139:in `call'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/unicorn-4.3.1/bin/unicorn_rails:139:in `block in rails_builder'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:696:in `call'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:696:in `build_app!'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/unicorn-4.3.1/lib/unicorn/http_server.rb:136:in `start'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/gems/unicorn-4.3.1/bin/unicorn_rails:209:in `'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/bin/unicorn_rails:23:in `load'
from /home/gitlab/gitlab/vendor/bundle/ruby/1.9.1/bin/unicorn_rails:23:in `'

So still no idea what is going on...

Benedikt Spellmeyer's picture

It seems like the modularity gem is missing. I got one step further by just adding it to the Gemfile:

gem "modularity"

Remember to update your gems:

bundle update
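Benedikt's fix, expressed as shell against the paths used elsewhere in this thread (an untested sketch; the modularity gem is the one that provides as_trait):

```shell
# Add the modularity gem to the Gemfile if it is not already there,
# then re-resolve the bundle as the gitlab user.
cd /home/gitlab/gitlab
grep -q '^gem "modularity"' Gemfile || echo 'gem "modularity"' >> Gemfile
sudo -u gitlab bundle update
```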

Jeremy Davis's picture

Another way to go may be to use the unofficial GitLab Debian repo. See here.

Obviously it is not signed, nor is it a known quantity (i.e. I have no idea who hosts it and/or how legit they are), but I assume (perhaps naively?) that it should be ok...!?

It seems to get regularly updated too which may be a handy thing...

The only question is whether it would be best to install it on the GitLab appliance, or on Core... Either way, TKLBAM will need a bit of tweaking to make it work right (to make sure all the repos etc. are backed up).
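On the TKLBAM tweaking: a sketch of how extra paths can be pulled into the backup set. The overrides mechanism is from TKLBAM's documentation, and the --simulate flag is assumed from its help output; none of this is verified against this appliance:

```shell
# TKLBAM backs up a profile-defined fileset; extra paths such as the bare
# git repositories can be added, one per line, to the overrides file.
echo "/home/git/repositories" >> /etc/tklbam/overrides
# Dry-run a backup to check what would be included.
tklbam-backup --simulate
```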

Bjørn Otto Vasbotten's picture

So we are still at a standstill on this whole upgrade issue?

Just one thing I noticed: GitLabHQ offers paid support for upgrading GitLab.

I could be interested in helping out with a few buck$ if anyone else is interested in chipping in to help make the TKL GitLab appliance upgradable.

Jeremy Davis's picture

I've hit a wall with the upgrade. I have no idea why GitLab won't start, have received no response to my post on the GitLab mailing list, and currently have no time to play more.

One thing that has occurred to me is that an incremental path may be the way to go. The benefit of that is that it can be built on. I have an idea of how we could do that, but as I say, I just don't have any spare time for it at the moment.

Whilst it would be nice to have the appliance upgradable, for the moment I am using it in-house (i.e. only accessible via LAN), so I don't really need it updated (though it would be nice); beyond investing time when I have it, I am not (yet) willing to invest cash. If we were, then perhaps Adrian (Moya) may be another option. I know he is not involved much here at TKL these days, but he has been a wonderful help in the past and may be willing to help out the TKL community again.

Adrian Moya's picture

Hi guys, sorry for not helping right away; I'm just swamped right now. But I talked last week to Alon about this thread. He commented that GitLab is being installed from a tarball, so it won't be upgradable as it is. I think he knows that this may not have been the best way to install it for the appliance, considering the awesome rhythm of releases the GitLab team is able to produce.

Anyway, I would need time to play with this which I don't have this week but I think I'll be able next week to take a look. Meanwhile, you can take a look at my original TKLPatch for this appliance (used as a base for the official one):

I hope it helps. If not, download TKL Core and apply the patch; once finished, the result should be upgradable using the official docs. If it goes well, I'll see if I can convince the TKL guys to change this particular appliance.

Jeremy Davis's picture

Thanks for your input mate.

I assumed that, seeing as the tarball would've been simply a snapshot of the stable repo, creating it as a git repo (git init) and adding the gitlab repo as the origin would in effect be the same as installing straight from git... But perhaps not?

If I have time I'll test your idea out with your patch.

Bjørn Otto Vasbotten's picture

As always, many thanks for your efforts.

So it seems it will remain unlikely that we will be able to upgrade our existing instance of TKL GitLab from 2.5, and we will probably end up with either a new TKL Core instance with Adrian's patch, or a future release of TKL GitLab.

It would be something of a nuisance if we lose the stuff we have stored in GitLab so far. Would it be difficult to transfer the data from our existing GitLab 2.5 installation to a new installation in the future?

Jeremy Davis's picture

I'm not 100% sure what we'll end up with. It's probably more likely we'll end up with a patch. I still personally hold out hope for an upgrade path for the TKL appliance, but unfortunately that is beyond my current ability, and I just don't have time to bang my head against the TKL GitLab wall anymore...

As for your data, there is no reason to think that migrating it should be any issue. It is highly likely that migrating the data using TKLBAM will still be possible (even from TKL GitLab to a custom TKL Core with GitLab installed). Failing that, I have seen data migration discussions/instructions on the GitLab mailing list, so that would be the worst-case scenario.

Jeremy Davis's picture

I have done it (sort of...)!

It's perhaps a dirty way of doing it, and I'm certainly not saying it's the best way, but it works! It basically reinstalls GitLab from scratch (with a fresh DB as well). I have included some lines that back up the old GitLab install folder and DB, as I intend to try to update my GitLab server (which includes data) but haven't got there yet. I also thought that at least if someone doesn't read all this and runs it on their existing server (with data), then all wouldn't be lost...

Ideally it'd be nice to create a TKLPatch of this, or at least a nice script that you can download and run, but until then...

Note: I am yet to try this on a server that already includes data. I hope to refine this so it will work with existing data, but for now (unless you want to build on my work) I suggest that if you have existing data, you run this on a clean install and then migrate your data across (I haven't tried that, but I have read about it on the GitLab wiki).

What it does:

  • updates package lists and upgrades all GitLab dependencies
  • backs up the DB
  • deletes (drops) the DB and recreates an empty one
  • backs up the gitlab folder (actually renames it)
  • then does a more-or-less clean install (from the stable GitLab GitHub repo)

You will need your MySQL root password handy (you will be asked for it three times).

apt-get update
apt-get install -y wget curl gcc checkinstall libxml2-dev libxslt-dev libcurl4-openssl-dev libreadline6-dev libc6-dev libssl-dev libmysql++-dev make build-essential zlib1g-dev libicu-dev redis-server openssh-server git-core python-dev python-pip libyaml-dev postfix libpq-dev
mysqldump -uroot -p -c --add-drop-table --add-locks --all --quick --lock-tables gitlab_production > gitlab_sql_dump.sql
mysqladmin -uroot -p drop gitlab_production
mysqladmin -uroot -p create gitlab_production
cd /home/gitlab
sudo -u gitlab mv gitlab gitlab-old
sudo gem install charlock_holmes --version '0.6.8'
sudo gem install bundler
sudo -u gitlab git clone -b stable gitlab
cd /home/gitlab/gitlab
sudo -u gitlab cp config/gitlab.yml.example config/gitlab.yml
sudo -u gitlab cp /home/gitlab/gitlab-old/config/database.yml config/database.yml
sudo -u gitlab bundle install --without development test sqlite postgres  --deployment
sudo -u gitlab bundle exec rake gitlab:app:setup RAILS_ENV=production
sudo -u gitlab bundle exec rake gitlab:app:status RAILS_ENV=production
sudo -u gitlab cp config/unicorn.rb.example config/unicorn.rb

And that should do it...!

Now start GitLab:

service gitlab start

And log in... Note that when logging into the WebUI you will need to use the default GitLab login credentials:

Warning: DO NOT RERUN THE FIRSTBOOT SCRIPTS! It will break GitLab!

The next thing I plan to try is migrating the DB (rather than dumping and recreating it). It would also be nice to preserve the initial login details (that you already set up). It'd be super nice to get it working with existing data (which will rely on not destroying the DB). I'd also like to adjust the firstboot script so it's not destructive.
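A sketch of that DB-migration idea, reusing the dump taken at the top of the step list. This is untested; it assumes the old dump restores cleanly into the recreated gitlab_production database before rake migrates it forward:

```shell
# Restore the old data into the fresh gitlab_production DB, then let rake
# migrate the schema forward to the new GitLab version.
mysql -uroot -p gitlab_production < gitlab_sql_dump.sql
cd /home/gitlab/gitlab
sudo -u gitlab bundle exec rake db:migrate RAILS_ENV=production
```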

Bjørn Otto Vasbotten's picture

Gonna get a new VM up and running and try this out. :)

Andrew's picture

Had to do some other things:



sudo gem install charlock_holmes --version '0.6.8'

root@gitlab gitlab/gitlab# sudo cp ./lib/hooks/post-receive /home/git/.gitolite/hooks/common/post-receive
root@gitlab gitlab/gitlab# sudo chown git:git /home/git/.gitolite/hooks/common/post-receive
root@gitlab gitlab/gitlab# chmod 750 /home/git/.gitolite
root@gitlab gitlab/gitlab# sudo -u gitlab bundle exec rake gitlab:app:status RAILS_ENV=production
Andrew's picture

I meant to write 


sudo gem install charlock_holmes --version '0.6.9'
Bjørn Otto Vasbotten's picture

Update 1:

Seems to be related to those pesky special Norwegian characters again, this time the ø in my first name.



Thanks to the command listing from Jeremy and the last lines from Andrew:


sudo cp ./lib/hooks/post-receive /home/git/.gitolite/hooks/common/post-receive
sudo chown git:git /home/git/.gitolite/hooks/common/post-receive
chmod 750 /home/git/.gitolite
sudo -u gitlab bundle exec rake gitlab:app:status RAILS_ENV=production
(I removed the shell prompt path, which could confuse some.)

I got it to run on a new VM that I set up for testing. But when I log in with the standard credentials Jeremy provided and try to create a new user, I get a 500 error in my browser.

I tried tailing some log files and got this:
tail -f production.log
Processing by Admin::UsersController#create as HTML
  Parameters: {"utf8"=>"â", "authenticity_token"=>"l7GAssmqupnOtLBy2AAOpf5KLw2a9Q2GqTu4RDHetoY=", "user"=>{"name"=>"Bjørn Otto Vasbotten", "email"=>"", "force_random_password"=>"[FILTERED]", "skype"=>"", "linkedin"=>"", "twitter"=>"", "projects_limit"=>"10", "admin"=>"1"}}
Completed 500 Internal Server Error in 224ms
Redis::InheritedError (Tried to use a connection from a child process without reconnecting. You need to reconnect to Redis after forking.):
  app/observers/user_observer.rb:5:in `after_create'
  app/controllers/admin/users_controller.rb:67:in `block in create'
  app/controllers/admin/users_controller.rb:66:in `create'
Any thoughts?
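One possibility, judging purely by the error text: unicorn opens the Redis connection before forking its workers, so each worker needs to reconnect. A common workaround of that era was a reconnect in unicorn's after_fork hook; the sketch below is untested here, and the Resque.redis reconnect call is assumed from Resque's API, not from this thread:

```shell
# Untested sketch: force each forked unicorn worker to open its own Redis
# connection by appending an after_fork hook to unicorn's config.
cat >> /home/gitlab/gitlab/config/unicorn.rb <<'EOF'
after_fork do |server, worker|
  Resque.redis.client.reconnect if defined?(Resque)
end
EOF
service gitlab restart
```

Note that if unicorn.rb already defines after_fork, appending a second hook will override the first, so check the file before doing this.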
Bjørn Otto Vasbotten's picture

So, now that we have a new server up and running, it's time to start migrating.

I found this thread, which should help me:!topic/gitlabhq/lmXEqj_cR4Q

Egor Zindy's picture

Hi there,

I followed Jeremy's instructions and Andrew's updates and got an almost-working 3.1 system. What didn't work for me was pushing code to the repository using SSH (I haven't tried HTTP yet).

Users could add their ssh keys through the web interface, but when pushing code to the repositories, git@gitlab was demanding a password. There are loads of reports about this condition, but I think this could be the issue (it is currently being worked on):

I tried to update the keys using gitlab:gitolite:update_keys, but my particular problem (I think) was a configuration issue in /home/git/.gitolite.rc. There is a hint in the link above about GIT_CONFIG_KEYS, and a solution elsewhere further mentions:

[...] When I checked /home/git/.gitolite.rc for GIT_CONFIG_KEYS, I found that the variable was set to an empty string. I had to change it manually:


(I changed the variable names to their v3 counterparts and to reflect what I did.)
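Since the actual snippet was lost above, here is a hedged reconstruction of the change Egor seems to describe for gitolite v3 (the variable name and quoting are assumptions; verify against your own /home/git/.gitolite.rc). It is demonstrated on a temp copy so the commands run anywhere:

```shell
# Stand-in for /home/git/.gitolite.rc (assumption: on a real server, edit
# the actual file as root or the git user).
RC="$(mktemp)"
echo "GIT_CONFIG_KEYS => ''," > "$RC"

# Change the empty value so gitolite accepts git config keys again.
sed -i "s/=> *''/=> '.*'/" "$RC"
```

After changing the real file, re-running the key/repo sync task (e.g. gitlab:gitolite:update_keys as mentioned above) would propagate the fix.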

After that, repositories got created for all my projects (previously /home/git/repositories/ only had the gitolite-admin.git and testing.git subfolders) and the users' public keys were all copied from /home/git/.gitolite/keydir/ into /home/git/.ssh/authorized_keys.

Of course, it could have been something entirely different that did the trick...

Bjørn Otto Vasbotten's picture

Hi all, it turns out that I ended up never doing this upgrade. We have now set up a new VM with the current TKL build, where I could actually follow GitLab's standard upgrade path, so I now have a fully working GitLab 6.1 installation running on TKL. :)

Lorenzo's picture

But how can I update my current appliance with all my projects inside?

Jeremy Davis's picture

AFAIK v12.1 has an updated version of GitLab and was designed to be easier to upgrade (although I suggest that you test, rather than just taking my recollection for granted).

So you may be better off migrating your data to a new server, rather than trying to upgrade your server with your data in there...

Bjørn Otto Vasbotten's picture

Yeah, that was my conclusion also. So far, we have only used Gitlab itself for administrivia like creating users and repos, as well as browsing repos and commits.

So we simply migrated all git repos to the new server, and since we are using the same email addresses and keys for users, all important history is preserved.

So with our 30 repos it was a bit of work to get everything moved, but nothing that couldn't be handled.

k0nsl's picture

I hope nobody missed this security upgrade:



Jeremy Davis's picture

Thanks for the heads up! :)

Royce's picture

I first found TKL and GitLab in the list of Proxmox containers. It came with GitLab 8-17. When I found some bugs that annoyed me, I began looking to upgrade.

I am upgrading from source. There are many problems with this but I am documenting them and the workarounds needed.

These scripts I have written are specific to TKL GitLab 14 running GitLab 8-17-stable. 

Upgrading to 10-8 works. Currently, I am working on getting 11-4 working.

OnePressTech's picture

I made the decision to use TKLXCore + GitLab Omnibus and have never had an issue. Press the button to upgrade and it is all done for you. That is not the case with a GitLab VM built from source...more complicated upgrade / lifecycle path.

GitLab has changed repeatedly and significantly over the years. I knew that would be the case when I first deployed 3 years ago so I did not go with the TKLX GitLab VM which is based on source and has a more complicated upgrade path if you are not a Ruby on Rails expert.

The only downside to TKLXCore + GitLab Omnibus is that TKLBAM needs custom configuration: the TKLBAM delta is built from the TKLX-Core image rather than the TKLX-GitLab image, so without custom TKLBAM configuration the delta is bigger.


Tim (Managing Director - OnePressTech)

Royce TheBiker's picture

Looking back on what my upgrade path was like, I would not recommend upgrading TKL GitLab. 

The TKL VM is a great way to introduce people to GitLab, but if you are serious about using it and you don't want it to consume all your time, then Omnibus is the way to go. 

I don't think there is an easy way to switch to Omnibus because it requires changing data engines from MySQL to PostgreSQL. 

I personally will stick with the TKL now that I have learned so much about how it works. Also, my upgrade path changed the data engine to MariaDB and I like what that community is doing. 



Jeremy Davis's picture

TBH, the DB mismatch is my primary concern stopping us from switching to the omnibus package in the immediate future. It's a significant reason why it hasn't happened sooner.

FWIW, GitLab are now sponsoring a (small) team to work on an "official" Debian package (hosted within the "official" Debian repos), so that may be an equally valuable (possibly even better) way for us to go?! It's hosted in backports, which is not completely ideal, but still superior to the current source install IMO.

TBH, whilst it is quite cool software, unless you need all the additional bells and whistles (e.g. built in CI), I think it's overkill for many use cases. I did have a copy running locally (for my own local development purposes) but as I was only using it as a local git server, moving over to Gitea was possibly one of the best things I've done! Gitea is much lighter weight on resources and is super simple to update. I'm sold! :)

OnePressTech's picture

Hey Jed...can you elaborate on why the DB is an issue? You support a PostgreSQL appliance, so I would be concerned if this is an issue...just saying :-)

Regarding your opinion about a personal Git GUI...well yes...that's not what GitLab is useful for. It is useful where there are 2+ users. So for TKLX, your software would be better managed from than or, better yet, an (Omnibus-powered) AWS-hosted GitLab. Now there's an idea...full C.I....killer.

I have my opensource projects on and I manage an AWS-hosted GitLab for a client. Been running both for a couple of years. Issues? None.

For desktop use, have a look at GitKraken or just manage Git from your favorite IDE. You might consider just doing an export / import to switch to an Omnibus install. I'm not sure you would lose anything in the transition. Just a thought.


Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

Apologies if my post was unclear. It's not that there are any problems with Postgres itself! Personally, I'm much more familiar with MySQL/MariaDB, but everything I've read suggests that Postgres is a more featureful RDBMS, and possibly even superior for most use cases. TBH, I'm not even clear why we use MySQL as the DB backend in our GL app. (IIRC at one point it was the one that they recommended?!)

My concern is purely with the swap from one RDBMS to another and the impact it will have on TurnKey users of our GitLab appliance. Whilst the migration process between TurnKey versions (especially minor version releases) can be flawless, as you would almost certainly be aware, there can be issues.

Please note though that if we have a well thought out and tested migration pathway (that at least is well documented, if not automated/semi-automated), then that's not such a big deal, but I just haven't had the time to work on that.

Doing it at a major release bump (e.g. v16.0) may be a reasonable option though as most existing users already understand that major version changes bring in the need for manual tweaks. Although documentation on migration would still be a must.

As something of an aside, I have some personal qualms about the Omnibus package. They're not hard reasons not to use it, just some observations of things I don't really like about it. They should be read in context of the fact that I'm something of a Debian purist.

One thing is that it is one massive (~450MB) monolithic package (rather than being split into "proper" unit packages with a dependency tree). I have some ideas on why they might do that, but I still don't like it. Another thing (somewhat related to my first point) is that it doesn't use any Debian dependencies (other than the base system, everything else is bundled; from git to Postgres). That means that auto secupdates will only apply to the base system; GitLab needs to be manually updated. Don't get me wrong, I understand that from a user perspective, that may be a small price to pay for a system that is generally much easier to maintain, but it makes it quite inconsistent with the rest of the library...

Re your note on GitLab export/import; as you suggest, that could be a potential pathway for our users. It certainly holds promise. Although I do note that the Import/Export module version must be the same for both exporter and importer, so it's not really an option for automated migration. Still, could be part of the documentation! :)

@Royce - If you end up using the GitLab export/import feature, then it might be best to install the same version of GitLab (via their omnibus package - assuming that's possible) as you are currently running. To see which versions of Import/Export apply to which versions of GitLab, have a look here. I note that they have packages going quite a while back; you can download them manually here. Regardless, if you go that path, I'd love to hear how you go.

@Tim again - Re GitLab vs Gitea - have you checked out Gitea at all? I'm certainly no GitLab power-user, so perhaps there is something I'm missing?! And obviously if you are using the integrated CI, then it may well not be that attractive (it doesn't have CI built in, but can integrate with external CI as per GH). But otherwise Gitea is pretty much a GitHub clone, written in Go.

It supports multiple users/orgs/teams/etc, provides SSH/HTTPS pushing/cloning, includes wikis, supports pull requests, etc. It's super light weight (my Gitea server is idling at 0% CPU, using 230MB RAM and 1.2GB of HDD - try doing that with GitLab!) and is a breeze to maintain. For a full feature comparison between all the active players in the field, have a look here.

In fairness, I have something of an aversion to web apps written in Ruby (and a love for ones written in Go), so that's no doubt a factor. Although I would argue that that's (at least partially) grounded in pragmatics. IME Ruby web apps are resource intensive and slow in comparison to web apps with similar feature sets written in different languages. The fact that GitLab has such a massive codebase and is a mishmash of (primarily) Ruby, Node and more recently Go as well, puts me off a bit too. I think their crazy rapid release cycle, plus the major changes that they've implemented suddenly in a minor version release (and sometimes swapped back again, e.g. the Unicorn -> Puma -> Unicorn thing a few years ago) is a bit hard to follow too.

Although having said all that, I do genuinely think that it is quite cool software. And as for everyday usage and ongoing maintenance, I must defer to your greater experience. My experience is likely jaded by the appliance build code maintenance I've been involved in (which would be significantly relieved by switching to a package, be it the Omnibus package or a Debian backports one).

FWIW my Gitea usage is primarily so I have a common repo for my code (I do development on a number of different computers). It's also great for sharing code with the small group of people I collaborate with.

TBH, for my current use case I could get away with a simple multi-user git server with no GUI. But the maintenance of that vs Gitea is not much less, and the ability for new users to set themselves up with no intervention from me is a major plus. And sometimes it's nice to be able to visualise what is going on via a browser...

FWIW, other than occasionally browsing code (and interacting with others) I use git exclusively from the commandline and am super happy and comfortable with that. Also I don't use an IDE as such (I'm a vim convert) although I probably should be using fugitive.vim (vim git plugin)...

OnePressTech's picture

To make a decision about having / not having a git repository VM in the TKLX library has much to do with the demands of the TKLX audience...for which I cannot speak, obviously.

So what would a TKLX audience want to do with a git repository?

With and GitHub offering free public and private hosted repositories, it is tough to put up any business case for a self-hosted git repository VM other than for privacy, educational, or intellectual property reasons.

The challenge is lifecycle management.

If a VM's lifecycle management cost is too high then the TKLX company and TKLX community cannot cost-effectively assist someone deploying that VM in a critical use-case. A shared development repository can certainly fall into the critical use-case category (when it goes down, you need it back fast).

In a SaaS world, the continued use of self-hosted VMs is typically for cost, privacy, or IP management reasons. There are some other reasons but it would require more time than I have in this post to get into (like ownership of business rules, business knowledge, business processes, commercial competition visibility, etc).

But anyone who takes on the burden of lifecycle managing a TKLX VM (or any supplied VM for that matter) has to decide whether the convenience of a quick install is outweighed by a massive lifecycle management burden.

GitLab falls into this category.

Although a GitLab VM is arguably no more or less complex than an Odoo or OrangeHRM VM, those are applications and are, for the most part, self-contained.

A Git Repository is actually the centre of a web of interconnected parts. Simple repository...complex interconnection network. Complex repository...simple interconnection network. Gitea is the former, GitLab is the latter.

The Gitea comparison looks good on paper but...

a) GitLab has an integrated end-to-end model

b) GitLab is Issue-Centric. Everything starts, continues, and completes w.r.t. an issue. This is different from Gitea. Gitea has issues but is not issue-centric. If you want a reasonable head-to-head comparison it would be GitLab vs. Trac. For value-for-effort I would take GitLab over Trac any day. GitLab is an issue-tracking system with an integrated issue-resolution model. There is nothing like it.

c) Take a look at GitLab's C.I. integrated support for Docker / Kubernetes. Brain-dead simple. I have a client that is using a C.I. runner to connect with their CRM system via API because the short Python script they wrote to do the job is in the repository and, well, they only needed a few button presses to deploy a scheduled daily on-demand container-based job-run via VULTR.

I don't believe a GitLab-based TKLX VM should be in any other form than an Omnibus install. Yes...this makes it different from other more conventional open-source based TKLX VMs but it is the only way to align this VM's post-install lifecycle management cost profile with the post-install LCM cost profile of, say, a LAMP VM.

Certainly a Gitea VM would align better with the other VMs from an install / post-install LCM profile, so it might be a better candidate for the TKLX company to manage rather than GitLab.

And for the record, discounting the usual system configuration issues you would expect with any Linux VM-based product, this is the upgrade script I run once a month after they release:

apt-get update
apt-get install gitlab-ce

What's my lifecycle burden compared to a Gitea VM? Just saying :-)
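One hedged addition to the two commands above: taking an application-level backup first. gitlab-rake ships with the Omnibus package; the commands are assembled and printed here rather than executed, since they need a real Omnibus install:

```shell
# Assemble the monthly routine (printed, not run; the actual commands need
# a live Omnibus box and root privileges).
PRE_BACKUP="gitlab-rake gitlab:backup:create"
UPGRADE="apt-get update && apt-get install gitlab-ce"
printf '%s\n%s\n' "$PRE_BACKUP" "$UPGRADE"
```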


Your call :-)


Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

Thanks for your insights and comparisons Tim. You make an incredibly compelling case (for a switch to the Omnibus install).

Your points re lifecycle management costs are on point and not lost on me! As has been discussed, it would certainly lighten the maintenance load for both us and end users - which can only be a plus for TurnKey. Anything that reduces the time and effort (internally or externally), means we can all do more of something else (hopefully more compelling than updating software)...! ;)

I anticipate that we will certainly look to a packaged GitLab install at some point (hopefully not too far away). Although I'm still not sure which direction we'll go: Omnibus or Debian backports. Personally, I'd prefer to leverage the Debian backports package(s), as it conforms to the "proper" Debian packaging regime - as noted in my previous post; small individual packages with a dependency tree. Using that install path would allay many of my concerns about the "monolithic" Omnibus package, whilst still providing most (if not all) of the value you ascribe to the Omnibus install. GitLab itself still wouldn't get auto security updates, but the underlying dependencies would. And updating GitLab would still be a simple apt update && apt install gitlab-ce! ;)
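As a sketch of what that backports route would look like (the suite name is an assumption for whatever Debian base the appliance ships, and I believe Debian's own package is named gitlab rather than gitlab-ce; the apt steps need root and a real system, so they are left as comments):

```shell
# Stand-in for /etc/apt/sources.list.d/backports.list (assumption: write
# the real file as root on an actual Debian system).
SOURCES="$(mktemp)"
echo 'deb http://deb.debian.org/debian buster-backports main' > "$SOURCES"

# Then, as root on the real server:
#   apt update
#   apt install -t buster-backports gitlab
```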

The fact that GitLab are now financially sponsoring a small team within Debian to do the (backports) packaging, adds weight too. A single (no matter how motivated) Debian developer maintaining it is nowhere near as appealing.

The bigger question of exactly how we will manage that transition remains though. It requires some further thought and consideration... I plan to discuss with Alon next week, so hope to make a decision soon, and start working towards the agreed plan soon after that.

FWIW I have been writing a response to your further points, but I need to get on with other stuff. I was just going to post what I'd written, but it's not quite ready for public consumption. It's a bit long winded and repetitive and could possibly be misconstrued as an anti GitLab rant (which it wasn't intended to be). Instead I've cut it back savagely, unfortunately, it no longer addresses many/most of your GitLab related points.

Regarding Gitea, it's worth noting that the primary reason why we built a Gitea appliance (and why I often mention it to users struggling in one way or another with GitLab) is that we've had a number of requests for it, both explicitly (i.e. they want a "Gitea appliance") and generically (i.e. they want something "more" than Revision Control but "less" than GitLab).

Bottom line is that I have no interest in dissing GitLab and I don't suggest Gitea as a "silver bullet" that is unequivocally "better" than GitLab by all metrics (it's clearly not). Although I do stand by the fact that in many use cases, Gitea is a legitimate (if not preferable) option to GitLab, depending on needs, preferences and available resources.

I have no desire to discourage anyone from using GitLab, merely a desire to highlight the Gitea appliance as something that may be a preferable alternative to some users. It certainly fulfils my needs better than GitLab.

Regardless of my personal opinions, while GitLab remains open source, we'll continue to provide a GitLab appliance (hopefully it'll soon be better than the one we currently provide).

Take care mate and thanks again for sharing your insights and experience. I may not always follow your advice/opinion exactly, but I do always appreciate your input and take it into consideration.

Jeremy Davis's picture

Alon and I had an extensive discussion on this recently and we have agreed that despite our mutual reservations, TurnKey (both internally and end users) would be best served by moving to an Omnibus install.

So the next TurnKey GitLab appliance release will switch from the current source install (using MySQL as backend DB) to an Omnibus install (using Postgres - as included in the monolithic Omnibus package).

The change will likely be a pain for existing users, but we should be able to leverage the "Export/Import" functionality of GitLab to allow existing users to migrate their data (TKLBAM will not be a realistic option). And moving forward, they should be able to leverage the ease of upgrade that you describe.

Whilst in many respects the Debian backports package avoids many of the concerns we have, it doesn't eliminate all of them. Concerns remaining with that path include: no guarantees of timely security updates to backports; we can't use auto security updates for GitLab (and backported dependencies) there either; and backports are not part of LTS (so once the Debian base moves to LTS, the only support path is to upgrade/migrate to a newer TurnKey). Also, even then, we can't really guarantee what will happen with that in the longer term. With all that in mind, an Omnibus install seems the most sensible.

We did also consider hacking the Omnibus install to allow us to use as many default Debian packages as possible (rather than all the applications bundled within the monolithic Omnibus package). But we decided against that too, for two main reasons. Firstly, there may be subtle incompatibilities that are not immediately obvious (but that users may encounter); even if that isn't the case immediately, it may well develop in the future. Secondly, it may make future upgrades problematic.

So despite all our reservations (as noted previously) we've agreed that the best of all the unappealing options is a "default" Omnibus install. I'm not sure when that will be ready, but soon I'm hoping.
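To make the Export/Import pathway concrete, here is a rough sketch using GitLab's project Export API (these endpoints exist in GitLab 10.6+; the server URL, project ID and token are placeholders, and the curl commands are assembled and printed rather than executed, since they need a live server):

```shell
# Placeholders (assumptions, not from this thread):
GITLAB_URL="https://old-gitlab.example.com"
PROJECT_ID=42

# Trigger a project export, then download the resulting archive.
EXPORT_CMD="curl -X POST -H 'PRIVATE-TOKEN: <token>' $GITLAB_URL/api/v4/projects/$PROJECT_ID/export"
DOWNLOAD_CMD="curl -H 'PRIVATE-TOKEN: <token>' -o project.tar.gz $GITLAB_URL/api/v4/projects/$PROJECT_ID/export/download"
printf '%s\n%s\n' "$EXPORT_CMD" "$DOWNLOAD_CMD"

# On the new server the archive can be imported via the web UI or the
# projects import endpoint. Note the caveat above: the Import/Export
# module versions must match on both ends.
```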

OnePressTech's picture

Sounds good...much appreciated...I assume you will use the TKLX Core? (So, using the Omnibus PostgreSQL install and not installing one ourselves.)

The benefit of using the omnibus on Core is that GitLab techs do all the testing and if there is a problem it can be reported and will likely get actioned. As soon as we deviate or vary the omnibus install we risk voiding an ability to get an issue we raise actioned by the GitLab tech team.

If you feel you would like some configuration changes to the Omnibus-installed components (NGINX and PostgreSQL) you might consider doing a post-install configuration & reboot script so we can more easily reproduce an out-of-the-box GitLab equivalent.

The main benefit I am hoping to gain from this exercise, if possible, is a working / optimised TKLBAM. I have not yet sorted out TKLBAM to do it myself. I expect that a base image on the TKLX server is required to really benefit from TKLBAM's incremental backup architecture?

Based on experience to date, post-omnibus-installation configurations relevant to the TKLX VM admin console / wizard are as follows:

1) Configure GitLab settings to support AWS as external data-store for large data items

2) Configure GitLab settings for Mattermost connectivity

3) Configure SSL Certificates

4) Configure a RAM-disk swapfile so GitLab does not trigger the dreaded OOM-killer resulting in 500 errors.

W.r.t. SSL Certs, Gitlab now natively supports LetsEncrypt but I expect there will need to be a quick check to ensure there is no overlap with the TKLX OS-level LetsEncrypt process.

I am just starting to use the GitLab Kubernetes CI integration so I don't yet have any thoughts on how a TKLX-GitLab might add value there.

I'm happy to be a contributing member once this new VM gets underway. Just let me know when and where I can assist.


Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

I assume you will use the TKLX Core? (So, using the Omnibus Postgresql install and not installing one ourselves.)

Until I dig in and start work on it, I'm not 100% sure exactly how it will go. As far as the buildcode goes, it won't be a rebase back on Core (it will be a continuation of the current GitLab buildcode repo). But as all appliances are essentially based on Core anyway, it sort of will be based on Core. Sorry, that probably makes limited sense...

In an effort to clarify my last paragraph: the new build will use all the contents of the GitLab Omnibus package. E.g. it will include git and Postgres from the Omnibus package (rather than from the Debian repos). I think that answers your question better!

If you feel you would like some configuration changes to the Omnibus-installed components (NGINX and PostgreSQL) you might consider doing a post-install configuration & reboot script so we can more easily reproduce an out-of-the-box GitLab equivalent.

The way we would generally do that is that any (new) defaults would be provided via an overlay or (preferably) tweaked via a conf script. But I anticipate that any changes we make would only be changes to GitLab defaults, not radical rewrites or anything. So, working on the assumption that it would be nothing particularly significant (i.e. stuff that a "reasonable", somewhat advanced end user might do), I wouldn't expect our changes to have any significance over and above what GitLab would come across with any end-user-installed system. Thus they should not have any impact on bug reports and/or support via GitLab.

Having said that, assuming that the Omnibus package is built in a sane way (and the defaults are included in the package as text files, then copied into place at install time), documenting how to get the original defaults from the Omnibus package should be pretty straight forward.

The main benefit I am hoping to gain from this exercise, if possible, is a working / optimised TKLBAM. I have not yet sorted out TKLBAM to do it myself. I expect that a base image on the TKLX server is required to really benefit from TKLBAM's incremental backup architecture?

Yes, we will be creating a new TKLBAM profile for the updated appliance (as we do anyway for each release). Moving to the Omnibus install will have the advantage that it should actually work a bit better than the current GitLab TKLBAM profile.

Based on experience to date, post-omnibus-installation configurations relevant to the TKLX VM admin console / wizard are as follows: 1) Configure GitLab settings to support AWS as external data-store for large data items 2) Configure GitLab settings for Mattermost connectivity 3) Configure SSL Certificates 4) Configure a RAM-disk swapfile so GitLab does not trigger the dreaded OOM-killer resulting in 500 errors.

Hmmm, I'm not sure that we would be configuring too many of those things, initially at least. We could consider adding them as confconsole plugins though. Are these initial options within some sort of GitLab "install/first-run wizard" or something? TBH, I've never seen mention of most of those before...

Re specifics:
1. You mean like git-lfs right? AFAIK git-lfs should already be an option OOTB?! And other than perhaps dependencies (boto?), I would imagine that only relatively minor config would be required. If you have further info/insight to share in that regard, please do so.

2. Again, isn't that something that comes OOTB? AFAIK the Omnibus package includes Mattermost support already? (Although, perhaps disabled by default?) At least, that's what my recent research lead me to believe...

3. As our implementation is webserver agnostic, I wouldn't expect this to be an issue. If you want a common cert for GitLab, Webmin and Webshell, then you could use the Confconsole plugin we provide. Otherwise if GitLab has some other implementation built in, then using that should not cause any issues. I would potentially expect one to override the other, but I wouldn't expect that to be problematic, although I guess we'll need to see how GitLab do it. And obviously it'll need testing.

4. In my testing (admittedly nothing compared to your first hand experience) a larger server with more RAM is a far superior option on AWS than adding swap. It gives a far superior user experience (RAM is tons faster than any harddrive, even SSD) and as AWS charge for IO (beyond the threshold that I forget OTTOMH), if swap is being used heavily, you may not even be saving any money!

OTOH I guess if you are trying to squeeze every last bit out of a server that is really too small for its peak usage, then it might be a good thing. I am somewhat hesitant to set that up by default, although again a confconsole plugin could be an option? Having said that, I do note that GitLab recommend 2GB swap, even when using (the recommended) 8GB min RAM, so perhaps we should consider it...?! (I think I prefer my lightning-fast Gitea server which uses about 256MB RAM! :-p) FWIW all the reading and reviews I've seen recommend a t2.medium as a minimum for GitLab on a single AWS EC2 instance (even for a single developer). Again I'd be interested in your experience.

I'm hoping to get onto this ASAP. Once I have something I am happy to share it, and would love some feedback on it. Although I won't have an AMI, only an ISO initially.

OnePressTech's picture

My comment about using Core was just meant to say that the VM should be Core + Omnibus and not Core + Posgresql + Omnibus.

Regarding the config options 1-2...these are not GitLab OOTB. They are all ready-to-go but you need to do a few things to enable them. I just thought I would put them on the radar for the VM confconsole / wizard. E.g. Mattermost needs a unique domain...could be prompted for in the confconsole / wizard.  AWS requires some credentials and bucket info to be added to multiple places in config files...again...could be prompted for in the confconsole / wizard.

Regarding option 3...the SSL just needs to be sorted. There are a number of places SSL certs need to be added to GitLab config files. Deciding to rely solely on TKLX LetsEncrypt + auto config file updates vs. enabling GitLab native LetsEncrypt is the only thing that needs a bit of thought.

I consider swapdisk to be mandatory. The issue is that GitLab, like most (if not all) Linux apps, does not moderate its RAM usage. No matter how much RAM you add you risk an OOM kill. The main culprit is C.I. jobs and artifacts. Because jobs can be long running you can have an OOM kill situation on a live system with no reasonable way of addressing it. The RAM disk is insurance. Additionally, AWS has economic flexibility on disk but not RAM. You have to unnecessarily jump an instance size to get more RAM. A decent-sized GitLab works fine on a Medium T3. Doubling the cost to a large instance is an unnecessary cost for a C.I.-triggered short-term RAM overrun. I have already implemented and documented the process for adding a RAM disk. It is quite simple actually (after I did all the research of course).

You bring the VM build knowledge and I will bring the GitLab knowledge. Between the two of us we could have the first pass VM in place within 24 hours and then go from there to decide how much additional effort would be required for release.

I'm ready when you are :-)



Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

Ok, this all sounds brilliant. I hope to make a start today, but I have a bit of a backlog, so no promises.

My inclination would be to take a phased approach to this:

  1. Build new GitLab appliance using Omnibus installer - basic testing to ensure all is as it should be (probably the easy part - I have no reason to think that there will be any issues).
  2. Test TKLBAM backup/restore to ensure we're capturing all required data.
  3. Test migration from existing (source install) GitLab server to new (Omnibus install) server (we've also had an offer via email for assistance with that which is cool). I have an idea on how this might be done, but yet to try "in the real world".
  4. Look at optimisations as suggested by you. I really like your suggestions to add value! These could have a staged rollout if time gets tight (e.g. include some initially, with others added in future releases).

Out of interest, re your swap tests/research; are you using a file or partition? Seeing as most builds have swap by default (it's only AWS AMIs that don't IIRC), my inclination would be to have a confconsole plugin which adds a swap file. There are a few advantages to that IMO:

  1. Super easy to implement and modify on the fly (ease of tuning - can be easily adjusted bigger/smaller).
  2. Can easily be disabled/removed if freeing disk space becomes a higher priority than providing swap (related to #1).
  3. No low level disk modifications required (safety - reduced risk of inadvertently causing damage due to circumstance outside our control).
  4. Will still be relevant to existing builds (AFAIK you can have both a swap file and a swap partition) - although it could easily be included in AWS only if that is deemed preferable (general applicability).

Alternatively, an additional volume dedicated to swap could be another option? But obviously would require adjustments outside of the server itself. So I think that just documenting that possibility is probably the best way to go.

I'd also be interested in what "swappiness" value you found to be ideal (assuming you played with that?). My inclination would be to set it somewhere between '1' (minimum swap usage - without disabling) and '10' (only swap when RAM usage hits ~90%). FWIW Debian default is '60' (and we don't adjust that). We could make it adjustable by the user (within Confconsole?), but I'm inclined to just set a sane default. We could provide some documentation to point users in the right direction if they want to dig in further.

[update] Actually, I'm not so sure re swappiness now... I've just read a more technical writeup which suggests that setting the ideal value is not as straightforward as I had been led to believe! It's also worth noting that because unused RAM is used as a disk cache, if the RAM is getting quite full, then that reduces the opportunity for caching. I guess considering that AWS servers have none by default currently anyway, adding swap (even if swappiness is low) will still have value.

OnePressTech's picture

Excellent...I am available to invest time most of next week :-)

Can we set up a phone call? I would like to learn the process of building a TKLX VM, configuring TKLBAM, and customising confconsole. I am happy to do some coding and testing. And I have documented all the configurations we will need. I would like to document the entire process required to build this VM.

FYI- I need to upgrade my client from an M1 Medium to a T2 Medium so this is perfect timing. And since I will need to migrate using Import Export I can document the migration process as well.

Regarding your swapfile question...I used a swapfile. I considered a swap partition a future consideration.

Swappiness should be 10. We can't wait until 100% RAM utilization and risk an OOM kill. 10% is an adequate buffer based on my analysis of previous OOM kills that occurred on my GitLab system.

Swapfile / Swap-partition references I short-listed:

- how-do-i-configure-swappiness

- setting-swap-partition-size

- how-to-configure-virtual-memory-swap-file-on-a-vps


My Instructions:

1) Configure Swappiness
- sudo vi /etc/sysctl.conf
- Add the following line:
    vm.swappiness = 10
- You can test this using the following BASH command:
    cat /proc/sys/vm/swappiness
- You can also change swappiness at runtime using the following BASH command:
    sudo sysctl vm.swappiness=10
2) Configure 2GB swapfile (blocksize (a.k.a. bs parameter) should not be too small or too large from an efficiency perspective)
    - cd /
    - dd if=/dev/zero of=/swapfile bs=2048k count=1000
    - chmod 600 /swapfile
    - mkswap /swapfile
    - swapon /swapfile
    - EDIT /etc/fstab and add the following line so that the swapfile is re-enabled on reboot:
        /swapfile       none    swap    sw      0       0
    You can check the swapfile using
    a) swapon -s
    b) free
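
The steps above can be folded into one small shell sketch (this is just an illustration, not an official TKL script; the /swapfile path, ~2GB size and swappiness of 10 are taken directly from the instructions above). Run setup_swap as root, or preview the commands first with DRY_RUN=1:

```shell
# Sketch only: bundles the swappiness + swapfile steps above into a function.

run() {
    # Execute a command, or just print it when DRY_RUN=1.
    if [ "${DRY_RUN:-0}" = "1" ]; then echo "+ $*"; else "$@"; fi
}

setup_swap() {
    SWAPFILE=/swapfile

    # 1) Persist swappiness=10 in sysctl.conf and apply it immediately
    run sh -c 'grep -q "^vm.swappiness" /etc/sysctl.conf || echo "vm.swappiness = 10" >> /etc/sysctl.conf'
    run sysctl vm.swappiness=10

    # 2) Create a ~2GB swapfile (2048k blocks x 1000) with safe permissions
    run dd if=/dev/zero of="$SWAPFILE" bs=2048k count=1000
    run chmod 600 "$SWAPFILE"
    run mkswap "$SWAPFILE"
    run swapon "$SWAPFILE"

    # 3) Re-enable the swapfile on reboot
    run sh -c "grep -q '^$SWAPFILE' /etc/fstab || echo '$SWAPFILE none swap sw 0 0' >> /etc/fstab"

    # 4) Verify
    run swapon -s
    run free -m
}

# setup_swap    # uncomment to run for real (as root)
```

DRY_RUN=1 setup_swap just echoes each command prefixed with "+", which is handy for sanity-checking on a system you don't want to touch yet.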

NOTE: GitLab VM should be defaulted to 20GB disk space and recommended minimum instance size is T2 / T3 Medium.



Tim (Managing Director - OnePressTech)

Jeremy Davis's picture

Thanks (again as per always) for sharing your insight, research and notes. Also your offer of assistance, testing and documentation is warmly welcome - not to mention super awesome! :)

A phone call is fine with me, but ideally I think it'd be best if we actually already have something concrete to discuss. Unfortunately so much time has been soaked up lately with support, plus I have some other appliance updates I really need to push out. I also just remembered that this weekend is a long one, so not sure how things will go...

Having said that, I did make a reasonable start on this yesterday and have a cleaned up, very "Core-like" base system with the GitLab repo set up, ready to install Omnibus. Next step will be installing the Omnibus package itself and seeing how that goes.

Unless I hit some unexpected issues, that should be possible today. If/when I get to that stage (and it appears to be working to some degree at least), I'll push the buildcode back to GitHub so yourself (and/or others) could start exploring/testing via TKLDev if desired. I may even upload an ISO and Proxmox build (Royce's preference AFAIK) to an AWS bucket for you guys to have a look at. I'm happy to do an OVA build as well if you'd rather that Tim?

I'll keep you posted and happy to schedule a voice chat for some time next week.

[update] As per Royce's suggestion (below), I've started a new thread over here for further discussion.

OnePressTech's picture

Cheers Jed. The reason for the call is for me to just quickly run through the build protocol and location of the right docs to consolidate. I want to make TKLDev / VM build doco a bit more cohesive.

Have a great long weekend (I'm freelance...we don't get holidays :-)


Tim (Managing Director - OnePressTech)

Royce TheBiker's picture

I've been using VMs for over 20 years, back to 1998 when a small company used GPL code to create a kernel accelerator module and gave rise to a new generation of virtualization, so I am a big fan of VMs.

That being said, if you are not changing OS or architecture, LXC is better in every way.

  • Faster, like way, way, way faster
  • Less overhead on CPU
  • Closer to RAM
  • Direct hardware access (not emulation)
  • Scales on the fly (adjust resources live)

On the topic of swap, my next home system is going to be either Threadripper or Epyc with multiple M.2 so I can use NVMe for swap. This will effectively give me a terabyte of slow RAM. 

At work we are going to start using HP G10s with Epyc and Optane. These systems are so insanely fast that just one G10 will be able to replace eight of our existing G8 servers, or an entire datacenter of G5s.

I love tech!

And we should change to a new thread because this is so far off topic.





Jeremy Davis's picture

Glad to have you onboard for the ride Royce! (I just realised that my metaphor nicely matches your username/pic! :)

As I just posted above, I was planning on doing a LXC/Proxmox build for you to test out, so we'll be good there. Although as I also said, I'll be pushing the buildcode too, so if you want to have a play with TKLDev, that's also an option. TKLDev needs to run in a VM though, but you can build an ISO in that, and from the ISO you build, generate an LXC/Proxmox build yourself if you go that path. Happy to give you some more TKLDev pointers if you'd like. Please feel free to ask and I can give you some links to have a look at to get you going.

Good call on the new thread too Royce. TBH, I was thinking that myself... So I've just done it. Please post further discussions to the new thread here.

Add new comment