
Announcing TurnKey LXC

In a nutshell, LXC (LinuX Containers) can be thought of as the middle ground between a chroot on steroids and a full-fledged virtual machine, making it possible to run multiple isolated "containers" on a single host.

There has been quite a lot of interest in supporting TurnKey on LXC, so I set out to see what it would take.

Plan A, no B, ok C

The initial plan was to generate LXC optimized builds for all appliances and document how to create containers. The problem was that the process to outline in the docs wasn't very streamlined: download the build, unpack it, run a couple of commands, download an example config, tweak the config.

With plan A out the window, I moved on to plan B - create an LXC template script to automate the process. The new direction meant there was no need to generate yet another optimized build format; instead we could just patch a current build format (OpenVZ) on the fly.

This was a good plan, but lots of documentation was still required, for example: how to set up LXC, the different networking options, and the different ways to expose container services.

Plan B was good, but code is better than documentation, so on to plan C: create a TurnKey LXC appliance with LXC pre-installed and configured, networking set up for both bridged and NAT, and convenience scripts and tools included for easily exposing container services.

So that's the story, and I'm pleased to announce two outcomes:

TurnKey LXC appliance

We've just released a TurnKey LXC appliance, pre-configured with everything LXC requires: LXC itself, the TurnKey LXC template, bridged and NAT networking out of the box, dnsmasq providing DHCP and DNS services, apt-cacher-ng so containers can share cached package downloads, as well as convenience scripts (nginx-proxy, iptables-nat) and related tools for exposing NAT'ed container services to the network.
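
To give a flavor of what the convenience scripts produce, a reverse-proxy site for a NAT'ed container might look roughly like this (a hypothetical sketch; the container name wp1, the domain, and the exact directives generated by nginx-proxy are assumptions, not the script's verified output):

```
# /etc/nginx/sites-available/www.example.com (sketch)
server {
    listen 80;
    server_name www.example.com;

    location / {
        # "wp1" is the container's hostname, resolved via the host's dnsmasq
        proxy_pass http://wp1;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```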

If you're new to LXC, or just want a turnkey LXC host, then this is for you. TurnKey LXC is available in all build formats for bare-metal and virtual environment installations, as well as deployment in the Amazon EC2 cloud.

Generic TurnKey LXC template

The TurnKey LXC template was developed from the get-go to be generic, and should be usable on any Linux distribution that supports LXC (which should be all of them, although we've only tested on Debian Wheezy).

Once LXC is set up on your distro, just download the template and drop it in the LXC templates directory (/usr/share/lxc/templates on Debian), and you should be able to create any of the 100+ TurnKey Linux appliances as LXC containers with a simple command:

lxc-create -n CONTAINER_NAME -t turnkey -- APPLIANCE_NAME -i /root/inithooks.conf
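
The -i option points the template at an inithooks.conf, which preseeds the appliance's firstboot questions so the container configures itself non-interactively. A minimal hypothetical example (the variable names differ per appliance and the values here are placeholders; see the usage docs for the real list):

```shell
# hypothetical /root/inithooks.conf (placeholder values, illustrative names)
export ROOT_PASS=changeme123
export DB_PASS=changeme123
export APP_EMAIL=admin@example.com
export APP_DOMAIN=www.example.com
export SEC_UPDATES=FORCE    # install security updates on firstboot
```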

See the usage documentation for further details.


Keep in mind, this is the initial release of TurnKey LXC, so let us know what you think. If you have ideas for improvement, or if you come across any issues (e.g., we haven't yet tested every appliance), drop us a line.

As with all TurnKey appliances, the source code is available on GitHub and we love pull requests.

You can get future posts delivered by email or good old-fashioned RSS.
TurnKey also has a presence on Google+, Twitter and Facebook.

Comments

Adrian Moya's picture

Kudos once again! It's very cool to have LXC support on TKL at last. I'll have to try this out because it sounds very well done, especially the preconfigured networking setup.

Philippe Grassia's picture

Thank you so much for the early christmas gift!!

Now I would suggest a couple things:

mention the inithooks right off the bat in the example command line to create the container. In my eagerness to try, I did not follow the documentation link, let alone read it at first, and got frustrated for a few minutes.

Kudos for the apt and web proxy within the appliance. 

Now, for me TurnKey Linux has provided a very efficient way to deliver high quality virtual appliances (or even bare-metal installs) for my testbeds. In that respect, downloading and patching the OpenVZ template, while ensuring a faster go-to-market for high quality payloads, does not strike me as the most efficient approach. It is there, reliable, high quality and also quite generic, but I would imagine the next step for the LXC appliance is to download only the minimum needed payload and build the container from the TKLdev system.

Finally, I have not yet made my mind whether it is competing against or bound to converge with docker (http://docker.io). Any input ?

Anyway, please do not let my "pushing the envelope" comment give a wrong impression: this is an OUTSTANDING release

Philippe


Alon Swartz's picture

Included inithooks reference

I updated the announcement to reference the inithooks option, thanks!

The first draft of the announcement included inithooks.conf in the cli example, as well as a detailed explanation. It also included the different networking examples and other information, but was basically a rewrite of the documentation - so I decided to just reference the docs and keep the announcement minimal. In hindsight, I should have at least referenced inithooks as it is required.

OpenVZ vs. dedicated LXC builds

If you look at the template you'll see the patching of the rootfs for LXC is quite minimal, so it didn't make much sense to have another build type (200+ builds) to save a few lines of code in the template. This might change in the future though; nothing is set in stone.

Integration with TKLDev

On the one hand, I anticipate that we'll be integrating the build infrastructure to use LXC instead of vanilla chroots in the future (we've been talking about it for a while). On the other hand, different build targets (e.g., lxc, vmdk, etc.) in addition to the default iso target will most likely be supported as well...

Docker

I was planning on writing about Docker another time, but seeing as you asked...

What I left out of the announcement was that I originally started looking into supporting TurnKey on Docker. I had working prototypes, but I felt I was missing a fundamental understanding of what was going on under the hood, which is when I started looking into vanilla LXC.

Let me just say that the work dotCloud and the community have done with Docker is great, and TurnKey will be supported on Docker. The thing is, Docker is designed for "application or process" containers - for example, running mysql, and only mysql. Docker short-circuits /sbin/init, so you can't really "boot" a container like in vanilla LXC; that isn't Docker's use case, which is understandable.

It is possible to workaround this though, and essentially use Docker for "system" containers, but it's sort of a hack over a hack. That said, there is still a valid use case for using TurnKey on Docker, and as I said above, it's planned.

[update]: TurnKey docker optimized builds are out

bmullan.mail's picture

I like the direction Alon !

I saw the options for downloading the LXC appliance (ovf, vmdk, iso, etc.) but not an LXC container itself?

Have you thought of hosting an LXC container image itself, so that those who already have LXC installed could just download & start (lxc-start -n) your TK LXC container?

LXC also supports nested LXC ... I've been using that for quite some time and it works well. So your LXC container "could" host other LXC containers in it, or be used to create a multi-tenancy type of system.

Example:

  • company A - gets TKL LXC container
  • company B - gets TKL LXC container

Companies A and B can then separately, using Nested containers, add any number of TKL applications as sub-containers.

A nice benefit of that is that by doing an "lxc-clone" of, say, Company A's "master" container, you have backed up all sub-containers for Company A (i.e. all of their TKL apps running in the sub-containers).

Stephane Graber describes the few steps to enable nested-lxc in ubuntu.

https://www.stgraber.org/2013/12/21/lxc-1-0-advanced-container-usage/

Brian


Jeremy Davis's picture

Except of course for this one...! :)

But as Alon says the LXC containers are made from lightly modified OVZ containers. The LXC appliance is designed to download the OVZ image and do the required mods to morph an OVZ container into an LXC container... FYI here is the script that does the work in the TKL LXC appliance...

Jeremy Davis's picture

Great effort guys! I definitely have to download this and have a play... Look forward to it.

This is a brilliant piece of capability. My partner is going to hate it :-) Hey honey...I'll just be a minute, just one more line of code and I'll be there to help you with the christmas decorations...what...oh they're all done now...cool!

Merry Christmas boys...thanks for the new toy!

Cheers,

Tim (Managing Director - OnePressTech)

Drew Ruggles's picture

OK, when I read, "... code is better than documentation..." I nearly fell off my chair, laughing. That pretty much summed up all my programmer friends in a nutshell, and explained why I'm usually the one stuck with creating the documentation (I'm not a programmer, but I hang out with them). Looking forward to exploring the LXC capabilities. Great holiday gift idea! Drew

If I created a LXC appliance and installed the following:

1) Container #1 - Wordpress

2) Container #2 - File Server

3) Container #3 - Observium

I would have 3 instances of webmin installed and 2 instances of PHPMyAdmin installed.

Any thoughts on how common tools might be handled differently within a LXC appliance?

Perhaps a Tools appliance, and a parameter that could be passed to TKLX appliance makefiles to not install common tools!

Just a thought :-)

Cheers,

Tim (Managing Director - OnePressTech)

John Carver's picture

I'm very excited about the LXC app.  Last year I wasted three months trying to learn OpenStack to build the foundation for a small business development environment.  After some frustration I came to the conclusion that OpenStack wasn't (then) quite ready, i.e. poor security etc.  I then turned to ProxMox, and over a weekend, was able to get my server setup with virtualization.  Lately, however, I've been concerned about changes to the ProxMox support license.  I also have a client (my church) with an older server that won't support VT and ProxMox.  I already have the TurnKey file app running there and have been adding custom packages for Ubiquiti's Unifi WiFi controller.  Now I'm thinking that I should switch to the LXC app and run the file app and WiFi controller in containers.  I'm already seeing the need to load additional apps but didn't want to have to buy another server just to run ProxMox.

I'm curious about your future plans for the LXC appliance.  I admit that I've gotten spoiled by the ProxMox gui and frustrated by its java-console.  I've already tried unsuccessfully to install LXC Web Panel (lxc-webpanel dot github dot io) on the TurnKey LXC app.  When they say they don't support Debian, they mean it. Another possibility is the community edition of OpenQRM (openqrm-enterprise dot com).  Have you looked at these or other candidates for a web front end for LXC?

ProxMox's use of a java based virtual console has been exceedingly frustrating.  They are switching to SPICE, but that requires a client on the user device, something else I'm not happy about.  The one project I've found, so far, that looks like what I want is Guacamole - HTML5 Clientless Remote Desktop (guac-dev dot org).  If you have work underway, I'll wait to see what happens, otherwise I'm willing to try to add one or both projects to the LXC app.

Did I mention I was excited about the LXC app?  Over the holidays, I pushed the beta1 version of a TurnKey Ansible app (github dot com/Dude4Linux/ansible).  I couldn't figure out how to initiate a pull-request for a new appliance, so consider this the announcement.  If anyone wants to try this, but doesn't have a TKLdev setup, let me know and I'll post an iso.

I got interested in CI when I attended a session, DIY Continuous Integration, by Allan Chappell @general_redneck, generalredneck dot com at last August's DrupalCorn Camp.  Then Mike Minecki of Four Kitchens tipped me off to Ansible, a radically simple competitor to Puppet and Chef.  I'm still feeling my way along learning CI and how the pieces fit together.

Ansible (and Puppet and Chef) typically work with Vagrant and VirtualBox to create and provision virtual hosts for testing or deployment.  Vagrant now has the ability to work with LXC containers in addition to VMware and VirtualBox.  It may also be possible to bypass Vagrant by creating an Ansible module for the LXC app to allow automated creation and destruction of test containers (CTs).  The GitLab and Jenkins apps also have a role in CI.

I debated whether it was better to put all the CI applications together in a single appliance, or keep them separate.  Now with the release of the LXC app, I'm convinced that separate is the way to go, keeping each application in a container; the overhead of doing so is minimal.  I would like to see TurnKey apps become aware of one another, i.e. launch an Ansible container, and it would find the LXC, GitLab, and Jenkins containers and configure itself accordingly.  Anyone else think this would be a good idea?

Tim has a good question about Webmin.  Fortunately, there is a GPL version of Virtualmin which could be integrated into the LXC appliance and configured to act as a front-end for all the Webmin modules running in the containers.  I guess it's up to Alon and Liraz to decide when/if they want to tackle this.

PS: I had to modify the cut/pasted links to get past the Spam filter.

Information is free, knowledge is acquired, but wisdom is earned.

John Carver's picture

I had been led by the HowToForge article, "How To Install OpenQRM 4.7 With LXC Containers In Debian Squeeze/Lenny", into thinking that OpenQRM was a web front-end for managing LXC containers.  After digging deeper into the OpenQRM website, it appears that it is much more than just a front-end.  It might be a candidate for a stand-alone CI appliance competing against Puppet, Chef, or Ansible, but it's not appropriate IMHO for adding to the LXC app.

Information is free, knowledge is acquired, but wisdom is earned.

Liraz Siri's picture

Thanks for taking the time to share the results of your research! I hadn't heard of Ansible before, but it sounds very interesting. Hopefully I'll be able to take a closer look once I finish putting out various fires.

I'm hoping Alon will have time to weigh in on the discussion as well.

I had more to say but triggered the SPAM filter for some unknown reason! I've emailed it to the Web Master.

Edit by Alon, Tim's comment via email

I have been considering TKLDev vs. Ansible vs. SaltStack vs. Puppet vs. Chef
vs. CFEngine in conjunction with TKLX / LXC / Docker. Seriously powerful
capabilities.

From my perspective TKLDev is the simplest for simple DevOps tasks. Ansible
and SaltStack are probably better for simple but rich DevOps tasks and
Puppet, Chef, and CFEngine are for those that want to impress people with
their cleverness at DevOps parties (just kidding). Puppet / Chef complexity
stems from their Google / Amazon / IBM cloud & hosting management
pedigree...useful power for mid-to-large companies but a budget stretch to
maintain for SMBs.

I would be interested in Liraz and Alon thoughts on complementing TklDev
with a formalised DevOps environment.

My vote would be for SaltStack mainly because it targets the same
demographic as TKLX, has a formal commitment to a full opensource future
(Ansible is already charging for their Web Client), has a broad DevOps
functional design footprint, and supports both Agent-based and Agent-less
environments. I like the simplicity of Agent-less solutions (SSH) but prefer
the security of Agent-based solutions (server-initiated connection to home
base).

Perhaps DevOps should be a separate blog thread for discussion though I
think these entries related to LXC DevOps should remain here. The
opportunity to maintain staging and deployment containers in a single VM to
minimise cloud costs is a distinctly LXC / Docker capability. Using DevOps
tools to maintain these containers just puts the icing on the cake.

Cheers,

Tim (Managing Director - OnePressTech)

John Carver's picture

The spam filter is currently blocking any embedded links, including those automatically converted into a link by Drupal.  I had to edit my links to remove the http and substitute 'dot' for '.' :)

Information is free, knowledge is acquired, but wisdom is earned.

Liraz Siri's picture

Sorry about the issues with the spam filter. I'm not sure why it's going haywire. I changed it from text analysis to CAPTCHA mode, which should hopefully save on the arbitrary spam rejections.

If you still run into issues, let me know.

John Carver's picture

Looks like the changes worked.  I can now post a link to AnsibleWorks, http://www.ansibleworks.com/ and SaltStack, http://www.saltstack.com/.

Let's take Tim's suggestion and move the DevOps discussion to a different thread and keep discussing the LXC appliance here.

Information is free, knowledge is acquired, but wisdom is earned.

John Carver's picture

It looks like the link-conversion filter only runs on the first preview.  I started with just the first link to AnsibleWorks and hit preview to see if it was blocked (it wasn't).  Then I edited the comment and added the link to SaltStack and hit preview again.  This time I didn't notice that the second link had not been converted before clicking save.  Always a mystery how Drupal works.  Liraz, you might want to take a look at the order in which the filters are applied.  That sometimes leads to weird results.

Information is free, knowledge is acquired, but wisdom is earned.

Pierre Lemay's picture

Great work! For your information, the lxc template works fine on a regular Debian Wheezy LXC-enabled host. But why not add squeeze as an available version in the script? Many appliances are still at version 12.x on squeeze. I added it and it worked fine for what I did.

get_image_name() {

    app_name=$1
    app_version=$2
    app_arch=$3

    tkl_version=$(echo $app_version | cut -d "-" -f 1)
    deb_release=$(echo $app_version | cut -d "-" -f 2)
    case "$deb_release" in
        squeeze) deb_version="debian-6";;
        wheezy)  deb_version="debian-7";;
        jessie)  deb_version="debian-8";;
        *)       fatal "debian release not recognized: $deb_release"
    esac

    name="${deb_version}-turnkey-${app_name}_${tkl_version}-1_${app_arch}.tar.gz"
    echo $name
}
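
To sanity-check the mapping, the function can be exercised standalone. The condensed copy below (with a stub fatal, and parameter expansion in place of cut) is illustrative only; the resulting filename follows the function's naming scheme and is not verified against the TurnKey download mirror:

```shell
# stub for the template's error helper (assumption: fatal prints and exits)
fatal() { echo "fatal: $*" >&2; exit 1; }

# condensed copy of get_image_name above, for a standalone check
get_image_name() {
    tkl_version=${2%%-*}     # e.g. 12.1 from 12.1-squeeze
    deb_release=${2##*-}     # e.g. squeeze from 12.1-squeeze
    case "$deb_release" in
        squeeze) deb_version="debian-6";;
        wheezy)  deb_version="debian-7";;
        jessie)  deb_version="debian-8";;
        *)       fatal "debian release not recognized: $deb_release";;
    esac
    echo "${deb_version}-turnkey-${1}_${tkl_version}-1_${3}.tar.gz"
}

name=$(get_image_name wordpress 12.1-squeeze i386)
echo "$name"   # debian-6-turnkey-wordpress_12.1-1_i386.tar.gz
```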
Liraz Siri's picture

Hey, that's an awesome idea! Nudge nudge, Alon, nudge nudge.

Alon Swartz's picture

I've added squeeze as a valid debian version to the template - reference commit

Thanks!!

Hans Harder's picture

great release.... too bad I see it just now...


Perhaps a suggestion: if people start/stop containers often, it is easy to adapt the container config file with pre and post hook scripts. That way, whenever you start a container you can set up the necessary nginx-proxy and additional NAT rules, and likewise when you stop a container.
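
In config terms, the suggestion above might look something like this (a sketch; the hook paths and scripts are hypothetical, e.g. small wrappers around the appliance's nginx-proxy and iptables-nat helpers):

```
# hypothetical snippet in /var/lib/lxc/wp1/config (requires lxc >= 1.0)
lxc.hook.pre-start = /var/lib/lxc/wp1/hooks/setup-proxy-and-nat
lxc.hook.post-stop = /var/lib/lxc/wp1/hooks/teardown-proxy-and-nat
```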

It is only available with lxc 1.0 :

See : https://www.stgraber.org/2013/12/23/lxc-1-0-some-more-advanced-container...

With older versions you can write a wrapper around lxc-start and lxc-stop.


See the other topics on storage if you want to use cloning, or on using the API with Python or C.

QUOTE:  ech`echo xiun|tr nu oc|sed 'sx\([sx]\)\([xoi]\)xo un\2\1 is xg'`ol

bmullan's picture

Just fyi but LXC 1.0 has finally been released after what seems forever in development.

Stephane Graber has written a really great 10 part blog series on the new capabilities including support for Unprivileged Containers.    I would think some of the new 1.0 features could be very useful to TKL's LXC support & deployment efforts....

https://www.stgraber.org/2013/12/20/lxc-1-0-blog-post-series/


Jeremy Davis's picture

Hopefully Debian will provide a backported version of LXC 1.0 for Wheezy. Otherwise we might have to wait for now... Or I guess we could consider installing direct from upstream... I guess it depends on whether Alon thinks that the features available in 1.0 outweigh the efforts to maintain an upstream install...

Derek Andrews's picture

Have not used Unix for 25 years, and am therefore (very) rusty (and even then only a beginner) ... things have moved on a bit.

Successfully installed turnkey fileserver and bittorrent sync on a HP microserver to have my own cloud; got it working in about an hour --- turnkey to me is "how else would you do it"!

Flushed with success, I installed LXC and then tried to install fileserver in a container; all went well until

root@lxc # lxc-start -n fileserver -d
lxc-start: no configuration file for '/sbin/init' (may crash the host)

Searching the web to find out what /sbin/init should look like produced no useful information (one blog had a conversation roughly of the form "I see what goes in the file ..." but no listing!).

What do I need to do?

Thanking you in anticipation.

p.s. excellent tool: again, how else would you do it? (I used IBM's VM system 30-odd years ago; VMs are excellent!)

bmullan's picture

You didn't mention which Linux distro you are using. You also provided the lxc-start command you used, but prior to that, did you use the lxc-create command to create the container first?

# first create the container using the "ubuntu" template
# (or change the -t to name a different template: debian, etc.)
$ sudo lxc-create -t ubuntu -n fileserver

# then start the container
$ sudo lxc-start -n fileserver -d

# after the container starts you should be at the CLI login prompt for that new container
bmullan's picture

Sorry my reply didn't keep its paragraph formatting, but hopefully you can follow that you have to first "create" the LXC container before you can "start" it.
Ivo's picture

Hi,

I've installed the appliance in a vbox (vbox bridged network).

if I follow the wordpress(NAT) example I end up with a working wordpress container.

However after a reboot the appliance doesn't work as expected.

The local IP of the LXC container shows the webpage, but I'm unable to use Webmin or Webshell.

After SSHing in and trying to restart nginx it throws me an error:

Starting nginx: nginx: [emerg] host not found in upstream "wp1" in /etc/nginx/sites-enabled/www.example.com:7
nginx: configuration file /etc/nginx/nginx.conf test failed

even if I start the wp1 container by hand -> lxc-start -n wp1 -d

Any ideas?

Otherwise this is awesome.

Jeremy Davis's picture

I really need to get onto this and have a play with LXC containers, but haven't yet... Probably won't get a chance in the next week or two either. FWIW as soon as I have time I will make it a priority to check this out...

enamch's picture

Hi,


I have the same problem. Did you manage to resolve this issue?


John Carver's picture

I'm trying to setup LXC on my laptop so that I can run a copy of my website without needing Internet access.  We're leaving for vacation and will attend two reunions where I want to have my genealogy web available, but neither site has Internet access.

I have LXC 1.0.4 installed and running on Ubuntu 14.04.  I can do an lxc-create using the debian template with no problems, but when I try the lxc-turnkey template I get

root@laptop:~# lxc-create -n drupal7 -t turnkey -- drupal7 -i /root/turnkey/inithooks.conf
getopt: unrecognized option '--rootfs=/var/lib/lxc/drupal7/rootfs'
lxc_container: container creation template for drupal7 failed
lxc_container: Error creating container drupal7

Any thoughts on where I should look for a solution?  Or should I just give up and install VirtualBox?

PS: I also have lxctl installed.  It seems to be having some issues with the latest version of lxc.  Could this be part of the problem?

Information is free, knowledge is acquired, but wisdom is earned.

brian mullan's picture

John, I use lxc more than TKL, but I looked at TurnKey's lxc documentation: https://github.com/turnkeylinux-apps/lxc/tree/master/docs

The format for their lxc-create command is a little different from yours.

Yours:

lxc-create -n drupal7 -t turnkey -- drupal7 -i /root/turnkey/inithooks.conf

Theirs (for wordpress, but I would assume drupal7 would/should look the same):

lxc-create -n wp1 -t turnkey -- wordpress -i /root/inithooks.conf -l natbr0 -x http://192.168.121.1:3124

In yours you have the inithooks.conf file in an extra subdirectory under /root called /root/turnkey; the TKL LXC documentation just shows inithooks.conf in /root. Does /root/turnkey exist?

As for lxctl... I have never used it. Looking at the GitHub site for lxctl, it looks like there has been no work done on it in the last 2 years (at least no dates have changed on contribution updates). So just to eliminate it as a possibility, you might want to purge it... you can always reinstall it later.
brian mullan's picture

Also, John, you didn't post it, but what does your inithooks file look like? The error seems to point to some "option" listed in that file.
John Carver's picture

Brian, The file /root/turnkey/inithooks.conf does exist and was created by me as a copy of the example given in the TKL documentation.  It contains exports of the passwords and other settings so I won't repeat it here.

I tested the same inithooks.conf on an actual TurnKey LXC appliance and the command worked fine.  The difference seems to be the version of LXC.  The TKL app uses LXC ver. 0.8 where I'm running 1.0.4 on my laptop.

Information is free, knowledge is acquired, but wisdom is earned.

John Carver's picture

After doing some investigation, the problem is that the newer version of lxc-create passes the parameter '--rootfs=/var/lib/lxc/<container_name>/rootfs' to the template. The rootfs may not always be at the default location, so the template must be prepared to handle other locations.
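
One way a template can tolerate that extra argument is to accept --rootfs in its option parsing and fall back to the default path when it is absent. The following is a hypothetical sketch of the idea, not the actual code from the pull request (the option names and the parse_template_opts helper are assumptions):

```shell
#!/bin/sh
# sketch: accept the --rootfs=... argument that lxc-create >= 1.0 passes
parse_template_opts() {
    rootfs=""
    options=$(getopt -o n:i: -l name:,inithooks:,rootfs: -- "$@") || return 1
    eval set -- "$options"
    while true; do
        case "$1" in
            -n|--name)      name=$2; shift 2;;
            -i|--inithooks) inithooks=$2; shift 2;;
            --rootfs)       rootfs=$2; shift 2;;
            --)             shift; break;;
        esac
    done
    # fall back to the default location when --rootfs was not given
    [ -n "$rootfs" ] || rootfs="/var/lib/lxc/$name/rootfs"
}

parse_template_opts --name=drupal7 --rootfs=/tmp/lxc/drupal7/rootfs
echo "$rootfs"    # /tmp/lxc/drupal7/rootfs

parse_template_opts --name=drupal7
echo "$rootfs"    # /var/lib/lxc/drupal7/rootfs
```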

The second issue is that on Ubuntu, the default bridge name is 'lxcbr0' and not 'br0'.  Other variations are possible on other distributions.

A third issue came up during the testing. Errors were occurring because the locale configuration was not being updated to match the host when running on Ubuntu.

I borrowed some code from the 1.0.4 version of the lxc-debian template to update the lxc-turnkey template, and then used it to successfully install Drupal7 on my Ubuntu 14.04 laptop.  I also tested the changes on the TurnKey LXC appliance to make sure they did not cause problems there.

I issued a pull request at https://github.com/turnkeylinux-apps/lxc/pull/2

Information is free, knowledge is acquired, but wisdom is earned.

Jonathan's picture

Does this LXC container still work? The LXC appliance hasn't been updated since 2013. Proxmox just released their 4.0 version, which supports LXC instead of OpenVZ. I don't use containers a lot, but they can be convenient at times.

Jeremy Davis's picture

And although it hasn't been updated since then, it should still work. John is currently spearheading the development of an updated version for v14.0, which hopefully we will wrap up and publish soon.

But reading your question, it sounds like you are wanting to run TurnKey containers on Proxmox, rather than installing an LXC host (that's what our LXC appliance is). As of yesterday, you should be able to access the updated v14.0 containers from within the Proxmox webUI. Have a look at the release announcement here (Proxmox specifics about halfway down).

John Carver's picture

AFAIK the 13.0 LXC appliance still works fine with the 13.0 openvz images.  Jeremy has been working on a new set of 14.0 appliances that are compatible with Proxmox 4.0.  The release of the 14.0 LXC appliance has been delayed to make sure that it supports the same images as Proxmox.  Rest assured we're hard at work making sure the new version is even better than the first.  I'm trying to make sure that LXC works well with the new Ansible appliance for a small business devops team.

If you have a tkldev setup, you can download and try my experimental version at https://github.com/Dude4Linux/lxc/tree/update-for-14.0-release.  If you do, and find any problems please let me know.

Information is free, knowledge is acquired, but wisdom is earned.

Jeremy Davis's picture

Thanks again for your great work on updating the LXC appliance! And thanks for posting the updated info.
