Announcing TurnKey LXC
In a nutshell, LXC (LinuX Containers) can be thought of as the middle ground between a chroot on steroids and a full-fledged virtual machine, making it possible to run multiple isolated "containers" on a single host.
There has been quite a lot of interest in supporting TurnKey on LXC, so I set out to see what it would take.
Plan A, no B, ok C
The initial plan was to generate LXC-optimized builds for all appliances and document how to create containers. The problem was that the process the docs would outline wasn't very streamlined: download build, unpack build, run a couple of commands, download example config, tweak config.
With plan A out the window, I moved on to plan B - create an LXC template script to automate the process. The new direction meant there was no need to generate yet another optimized build format; instead we could just patch a current build format (OpenVZ) on the fly.
This was a good plan, but lots of documentation was still required, for example: how to set up LXC, the different networking options, and the different ways to expose container services.
Plan B was good, but code is better than documentation, so on to plan C: create a TurnKey LXC appliance with LXC pre-installed and configured, networking set up for both bridged and NAT, and convenience scripts and tools included for easily exposing container services.
So that's the story, and I'm pleased to announce two outcomes:
TurnKey LXC appliance
We've just released a TurnKey LXC appliance, pre-configured with everything LXC requires, including LXC itself, the TurnKey LXC template, bridge and NAT networking out of the box, dnsmasq providing DHCP and DNS services, apt-cacher-ng so containers can share cached package downloads, as well as convenience scripts (nginx-proxy, iptables-nat) and related tools for exposing NAT'ed container services to the network.
If you're new to LXC, or just want a turnkey LXC host, then this is for you. TurnKey LXC is available in all build formats for bare-metal and virtual environment installations, as well as deployment in the Amazon EC2 cloud.
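For context, exposing a NAT'ed container's service boils down to a DNAT rule on the host. The snippet below is a generic illustration of the kind of rule the bundled iptables-nat script automates; the container IP is hypothetical, and this is not the script's actual implementation:

```shell
# Build the DNAT rule that would forward host port 80 to a NAT'ed
# container (IP is illustrative). The rule is printed rather than
# applied, since applying it requires root.
CONTAINER_IP=192.168.121.100
RULE="-t nat -A PREROUTING -p tcp --dport 80 -j DNAT --to-destination ${CONTAINER_IP}:80"
echo "iptables $RULE"
```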
Generic TurnKey LXC template
The TurnKey LXC template was developed from the get-go to be generic, and should be usable on any Linux distribution that supports LXC (which should be all of them, although we've only tested on Debian Wheezy).
Once LXC is set up on your distro, just download the template and drop it in the LXC templates directory (/usr/share/lxc/templates on Debian), and you should be able to create any of the 100+ TurnKey Linux appliances as LXC containers with a simple command:
lxc-create -n CONTAINER_NAME -t turnkey -- APPLIANCE_NAME -i /root/inithooks.conf
See the usage documentation for further details.
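For reference, the inithooks.conf passed via -i is just a shell fragment exporting preseed variables. A hypothetical minimal example follows; the exact variable names vary per appliance, so check the inithooks documentation for the ones yours expects:

```shell
# Hypothetical /root/inithooks.conf - preseeds first-boot questions.
# Variable names follow the TurnKey inithooks convention; values here
# are placeholders.
export ROOT_PASS=changeme123
export DB_PASS=changeme456
export APP_EMAIL=admin@example.com
export HUB_APIKEY=SKIP
export SEC_UPDATES=FORCE
```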
Keep in mind, this is the initial release of TurnKey LXC, so let us know what you think. If you have ideas for improvement, or if you come across any issues (e.g., we haven't yet tested all appliances), drop us a line.
As with all TurnKey appliances, the source code is available on GitHub and we love pull requests.
Comments
Very cool
Kudos once again! It's very cool to have LXC support on TKL at last. I'll have to try this out because it sounds very well done, especially the preconfigured networking setup.
Thanks for the feedback!
Included inithooks reference
I updated the announcement to reference the inithooks option, thanks!
The first draft of the announcement included inithooks.conf in the CLI example, as well as a detailed explanation. It also included the different networking examples and other information, but was basically a rewrite of the documentation - so I decided to just reference the docs and keep the announcement minimal. In hindsight, I should have at least referenced inithooks as it is required.
OpenVZ vs. dedicated LXC builds
If you look at the template you'll see the patching of the rootfs for LXC is quite minimal, so it didn't make much sense to have another build type (200+ builds) to save a few lines of code in the template. This might change in the future though, nothing is set in stone.
Integration with TKLDev
On the one hand, I anticipate that we'll be integrating the build infrastructure to use LXC instead of vanilla chroots in the future (we've been talking about it for a while). On the other hand, different build targets (e.g., lxc, vmdk) in addition to the default iso target will most likely be supported as well...
Docker
I was planning on writing about Docker another time, but seeing as you asked...
What I left out of the announcement was that I originally started looking into supporting TurnKey on Docker. I had working prototypes, but I felt I was missing a fundamental understanding of what was going on under the hood, which is when I started looking into vanilla LXC.
Let me just say that the work dotCloud and the community have done with Docker is great, and TurnKey will be supported on Docker. The thing is, Docker is designed for "application or process" containers - for example, running mysql, and only mysql. Docker short-circuits /sbin/init so you can't really "boot" a container like in vanilla LXC; that's not Docker's use case, which is understandable.
It is possible to work around this though, and essentially use Docker for "system" containers, but it's sort of a hack on top of a hack. That said, there is still a valid use case for TurnKey on Docker, and as I said above, it's planned.
[update]: TurnKey docker optimized builds are out
LXC Appliance
I like the direction, Alon!
I saw the options for downloading the LXC appliance - ovf, vmdk, iso, etc. - but not an LXC container itself?
Have you thought of hosting an LXC container image itself, so that those who already have LXC installed could just download & start (lxc-start -n) your TKL LXC container?
Also, LXC supports nested LXC... I've been using that for quite some time and it works well. So your LXC container could host other LXC containers in it, or be used to create a multi-tenancy type of system.
Example:
Companies A and B can then separately, using Nested containers, add any number of TKL applications as sub-containers.
A nice benefit is that by doing an "lxc-clone" of, say, Company A's "master" container, you have backed up all of Company A's sub-containers (i.e. all of their TKL apps running in those sub-containers).
Stéphane Graber describes the few steps to enable nested LXC in Ubuntu.
https://www.stgraber.org/2013/12/21/lxc-1-0-advanced-container-usage/
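In short (per Stéphane's post, for Ubuntu's LXC 1.0 packages), enabling nesting means switching the guest to the nesting-aware AppArmor profile in its config on the host - roughly as below. The path and key may differ on other distros or LXC versions:

```
# /var/lib/lxc/CONTAINER/config (excerpt, Ubuntu + LXC 1.0)
lxc.aa_profile = lxc-container-default-with-nesting
```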
Brian
Currently there are no LXC build downloads as such
Except of course for this one...! :)
But as Alon says the LXC containers are made from lightly modified OVZ containers. The LXC appliance is designed to download the OVZ image and do the required mods to morph an OVZ container into an LXC container... FYI here is the script that does the work in the TKL LXC appliance...
Once again... Great work! :)
Great effort guys! I definitely have to download this and have a play... Looking forward to it.
What can I say that hasn't been said before...superb.
This is a brilliant piece of capability. My partner is going to hate it :-) Hey honey... I'll just be a minute, just one more line of code and I'll be there to help you with the Christmas decorations... what... oh they're all done now... cool!
Merry Christmas boys...thanks for the new toy!
Cheers,
Tim (Managing Director - OnePressTech)
Congratulations on the new TKL LXC appliance
Dumb LXC Question...
If I created a LXC appliance and installed the following:
1) Container #1 - Wordpress
2) Container #2 - File Server
3) Container #3 - Observium
I would have 3 instances of Webmin installed and 2 instances of phpMyAdmin installed.
Any thoughts on how common tools might be handled differently within a LXC appliance?
Perhaps a Tools appliance, and a parameter that could be passed to TKLX appliance makefiles to not install common tools!
Just a thought :-)
Cheers,
Tim (Managing Director - OnePressTech)
LXC Roadmap
I'm very excited about the LXC app. Last year I wasted three months trying to learn OpenStack to build the foundation for a small business development environment. After some frustration I came to the conclusion that OpenStack wasn't (then) quite ready, i.e. poor security etc. I then turned to ProxMox, and over a weekend, was able to get my server setup with virtualization. Lately, however, I've been concerned about changes to the ProxMox support license. I also have a client (my church) with an older server that won't support VT and ProxMox. I already have the TurnKey file app running there and have been adding custom packages for Ubiquiti's Unifi WiFi controller. Now I'm thinking that I should switch to the LXC app and run the file app and WiFi controller in containers. I'm already seeing the need to load additional apps but didn't want to have to buy another server just to run ProxMox.
I'm curious about your future plans for the LXC appliance. I admit that I've gotten spoiled by the ProxMox gui and frustrated by its java-console. I've already tried unsuccessfully to install LXC Web Panel (lxc-webpanel dot github dot io) on the TurnKey LXC app. When they say they don't support Debian, they mean it. Another possibility is the community edition of OpenQRM (openqrm-enterprise dot com). Have you looked at these or other candidates for a web front end for LXC?
ProxMox's use of a java based virtual console has been exceedingly frustrating. They are switching to SPICE, but that requires a client on the user device, something else I'm not happy about. The one project I've found, so far, that looks like what I want is Guacamole - HTML5 Clientless Remote Desktop (guac-dev dot org). If you have work underway, I'll wait to see what happens, otherwise I'm willing to try to add one or both projects to the LXC app.
Did I mention I was excited about the LXC app? Over the holidays, I pushed the beta1 version of a TurnKey Ansible app (github dot com/Dude4Linux/ansible). I couldn't figure out how to initiate a pull-request for a new appliance, so consider this the announcement. If anyone wants to try this, but doesn't have a TKLdev setup, let me know and I'll post an iso.
I got interested in CI when I attended a session, DIY Continuous Integration, by Allan Chappell @general_redneck, generalredneck dot com at last August's DrupalCorn Camp. Then Mike Minecki of Four Kitchens tipped me off to Ansible, a radically simple competitor to Puppet and Chef. I'm still feeling my way along learning CI and how the pieces fit together.
Ansible (and Puppet and Chef) typically work with Vagrant and VirtualBox to create and provision virtual hosts for testing or deployment. Vagrant now has the ability to work with LXC containers in addition to VMware and VirtualBox. It may also be possible to bypass Vagrant by creating an Ansible module for the LXC app to allow automated creation and destruction of test containers (CTs). The GitLab and Jenkins apps also have a role in CI.
I debated whether it was better to put all the CI applications together in a single appliance, or keep them separate. Now with the release of the LXC app, I'm convinced that separate is the way to go: keep each application in a container, and the overhead of doing so is minimal. I would like to see TurnKey apps become aware of one another, i.e. launch an Ansible container and it would find the LXC, GitLab, and Jenkins containers and configure itself accordingly. Anyone else think this would be a good idea?
Tim has a good question about Webmin. Fortunately, there is a GPL version of Virtualmin which could be integrated into the LXC appliance and configured to act as a front-end for all the Webmin modules running in the containers. I guess it's up to Alon and Liraz to decide when/if they want to tackle this.
PS: I had to modify the cut/pasted links to get past the Spam filter.
Information is free, knowledge is acquired, but wisdom is earned.
More on OpenQRM
I had been led by the HowToForge article, "How To Install OpenQRM 4.7 With LXC Containers In Debian Squeeze/Lenny", into thinking that OpenQRM was a web front-end for managing LXC containers. After digging deeper into the OpenQRM website, it appears that it is much more than just a front-end. It might be a candidate for a stand-alone CI appliance competing against Puppet, Chef, or Ansible, but it's not appropriate IMHO for adding to the LXC app.
Information is free, knowledge is acquired, but wisdom is earned.
Hadn't come across Ansible before...
I'm hoping Alon will have time to weigh in on the discussion as well.
I second John's interest in TKLX DevOps w.r.t. LXC
I had more to say but triggered the SPAM filter for some unknown reason! I've emailed it to the Web Master.
Edit by Alon, Tim's comment via email
Cheers,
Tim (Managing Director - OnePressTech)
SPAM Filter
The spam filter is currently blocking any embedded links, including those automatically converted into a link by Drupal. I had to edit my links, removing the http and substituting 'dot' for '.' :)
Information is free, knowledge is acquired, but wisdom is earned.
I tweaked the spam filter configuration
If you still run into issues, let me know.
Testing SPAM filter
Looks like the changes worked. I can now post a link to AnsibleWorks, http://www.ansibleworks.com/ and SaltStack, http://www.saltstack.com/.
Let's take Tim's suggestion and move the DevOps discussion to a different thread and keep discussing the LXC appliance here.
Information is free, knowledge is acquired, but wisdom is earned.
Humm??
It looks like the link-conversion filter only runs on the first preview. I started with just the first link to AnsibleWorks and hit preview to see if it was blocked (it wasn't). Then I edited the comment and added the link to SaltStack and hit preview again. This time I didn't notice that the second link had not been converted before clicking on save. Always a mystery how Drupal works. Liraz, you might want to take a look at the order in which the filters are applied. That sometimes leads to weird results.
Information is free, knowledge is acquired, but wisdom is earned.
Hey, that's an awesome idea!
Hey, that's an awesome idea! Nudge nudge, Alon, nudge nudge.
Done
I've added squeeze as a valid Debian version to the template - reference commit
Thanks!!
great release.... too bad I
great release.... too bad I see it just now...
Perhaps a suggestion: if people start/stop containers often, it is easy to adapt the container config file with pre- and post-hook scripts. That way, whenever you start a container you can set up the necessary nginx-proxy and additional NAT rules, and likewise when you stop a container.
This is only available with LXC 1.0:
See : https://www.stgraber.org/2013/12/23/lxc-1-0-some-more-advanced-container...
With older versions you can write a wrapper around lxc-start and lxc-stop.
See the other topics on storage if you want to use cloning, or on using the API with Python or C.
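To illustrate the suggestion, with LXC 1.0 the hooks live in the container's config file. The script paths below are hypothetical placeholders for whatever wraps the nginx-proxy and NAT rule setup:

```
# /var/lib/lxc/CONTAINER/config (excerpt, LXC >= 1.0)
lxc.hook.pre-start = /usr/local/bin/container-nat-up
lxc.hook.post-stop = /usr/local/bin/container-nat-down
```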
QUOTE: ech`echo xiun|tr nu oc|sed 'sx\([sx]\)\([xoi]\)xo un\2\1 is xg'`ol
Thanks!
Hopefully Debian will provide a backported version of LXC 1.0 for Wheezy. Otherwise we might have to wait for now... Or I guess we could consider installing direct from upstream... I guess it depends on whether Alon thinks that the features available in 1.0 outweigh the efforts to maintain an upstream install...
Sorry I don't have anything to add...
I really need to get onto this and have a play with LXC containers, but haven't done yet... Probably won't get a chance in the next week or 2 either. FWIW as soon as I have time I will make it a priority to check this out...
lxc-turnkey template on LXC 1.0.4 with Ubuntu 14.04
I'm trying to setup LXC on my laptop so that I can run a copy of my website without needing Internet access. We're leaving for vacation and will attend two reunions where I want to have my genealogy web available, but neither site has Internet access.
I have LXC 1.0.4 installed and running on Ubuntu 14.04. I can do an lxc-create using the debian template with no problems, but when I try the lxc-turnkey template I get
Any thoughts on where I should look for a solution? Or should I just give up and install VirtualBox?
PS: I also have lxctl installed. It seems to be having some issues with the latest version of lxc. Could this be part of the problem?
Information is free, knowledge is acquired, but wisdom is earned.
lxc-turnkey template on LXC 1.0.4 with Ubuntu 14.04
Brian, The file /root/turnkey/inithooks.conf does exist and was created by me as a copy of the example given in the TKL documentation. It contains exports of the passwords and other settings so I won't repeat it here.
I tested the same inithooks.conf on an actual TurnKey LXC appliance and the command worked fine. The difference seems to be the version of LXC. The TKL app uses LXC ver. 0.8 where I'm running 1.0.4 on my laptop.
Information is free, knowledge is acquired, but wisdom is earned.
lxc-turnkey template on LXC 1.0.4 with Ubuntu 14.04
After doing some investigation, the problem is that the newer version of lxc-create passes the parameter '--rootfs=/var/lib/lxc/<container_name>/rootfs' to the template. The rootfs may not always be at the default location, so the template must be prepared to handle other locations.
The second issue is that on Ubuntu, the default bridge name is 'lxcbr0' and not 'br0'. Other variations are possible on other distributions.
A third issue came up during the testing. Errors were occurring because the locale configuration was not being updated to match the host when running on Ubuntu.
I borrowed some code from the 1.04 version of the lxc-debian template to update the lxc-turnkey template and then used it to successfully install Drupal7 on my Ubuntu 14.04 laptop. I also tested the changes on the TurnKey LXC appliance to make sure they did not cause problems there.
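The gist of the --rootfs fix is that the template should honour an explicit path and only fall back to the default location. Below is a simplified sketch of that argument handling; the real template (and the lxc-debian template it borrows from) handles more options, and the function name is just illustrative:

```shell
# Resolve the rootfs path a template should use: honour an explicit
# --rootfs argument, else fall back to the conventional location.
resolve_rootfs() {
    rootfs=""
    name=""
    while [ $# -gt 0 ]; do
        case "$1" in
            --rootfs=*) rootfs="${1#--rootfs=}"; shift ;;
            --rootfs)   rootfs="$2"; shift 2 ;;
            -n|--name)  name="$2"; shift 2 ;;
            *)          shift ;;
        esac
    done
    # Default to the standard LXC layout when no --rootfs was given.
    [ -n "$rootfs" ] || rootfs="/var/lib/lxc/${name}/rootfs"
    echo "$rootfs"
}

resolve_rootfs -n mycontainer
resolve_rootfs -n mycontainer --rootfs /srv/lxc/mycontainer/rootfs
```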
I issued a pull request at https://github.com/turnkeylinux-apps/lxc/pull/2
Information is free, knowledge is acquired, but wisdom is earned.
The LXC appliance was released in 2013
But reading your question, it sounds like you are wanting to run TurnKey containers on Proxmox, rather than installing an LXC host (that's what our LXC appliance is). As of yesterday, you should be able to access the updated v14.0 containers from within the Proxmox webUI. Have a look at the release announcement here (Proxmox specifics about halfway down).
AFAIK the 13.0 LXC appliance still works fine
AFAIK the 13.0 LXC appliance still works fine with the 13.0 openvz images. Jeremy has been working on a new set of 14.0 appliances that are compatible with Proxmox 4.0. The release of the 14.0 LXC appliance has been delayed to make sure that it supports the same images as Proxmox. Rest assured we're hard at work making sure the new version is even better than the first. I'm trying to make sure that LXC works well with the new Ansible appliance for a small business devops team.
If you have a tkldev setup, you can download and try my experimental version at https://github.com/Dude4Linux/lxc/tree/update-for-14.0-release. If you do, and find any problems please let me know.
Information is free, knowledge is acquired, but wisdom is earned.
Great work John! :)
List of LXC container names
Is there a list of the LXC container names that can be used with LXC?
I have successfully worked with Wordpress, Dokuwiki, Jenkins and Odoo, but would like to experiment with some of the other appliances in LXC, like Mayan, LAMP and the File Server.
List of LXC container names
Actually any TurnKey GNU/Linux appliance except LXC and TKLDev should run in a container. I've run them programmatically using the Ansible appliance to automate security testing. LXC won't run in a container because it doesn't yet support nested containers and TKLDev has issues running fab-chroot inside a container. Other than those, I'm not aware of any issues. If you run across any, please open a ticket.
Information is free, knowledge is acquired, but wisdom is earned.
As noted by John, they should pretty much all work!
As noted by John, other than those 2 exceptions, they should pretty much all work!
As he also said, if not, please let us know!
Having said that, there are some further limitations, which may or may not apply to your host. v15.x appliances would only run in privileged containers by default (although there was an easy workaround). On some platforms (e.g. Proxmox), the v16.0 ones will only run properly in unprivileged containers. Apparently if you hit that issue (and need/want to run a privileged container) then you can enable "nested" LXC mode for that particular guest (in the container's LXC guest config - on the host). Note though that there are some potential security implications for that as it will give the guest write access to /proc and /sys (perhaps other system dirs?).
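On Proxmox specifically, my understanding is that the nesting toggle is a one-line feature flag in the container's config on the host (CTID is a placeholder; equivalently, `pct set CTID -features nesting=1`):

```
# /etc/pve/lxc/CTID.conf (excerpt)
features: nesting=1
```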
For a list of the ones released as v16.0 (to date) please see this list (in part 3 of the v16.0 release blog post series). Obviously that link won't age well, so please be sure to check for newer release announcements in the blog section of the website, or more specifically via blog tags such as "v16.x", "release" or "stable".
Yes it would be great!
That certainly sounds pretty cool! If you are already using Ubuntu and LXD and want local VMs & containers, then I could certainly see the appeal. Although FWIW personally, I've been using Proxmox for years (which also supports both KVM and LXC - and has TurnKey LXC images integrated OOTB).
So I agree that it would be great Brian, although unfortunately we just don't have the bandwidth to prioritise this right now. Or I anticipate anytime soon TBH. We have a "todo" list a mile long and new things are constantly getting added, so often great ideas (such as this one) keep getting pushed down.
If it were possible to get our images listed "officially" (e.g. like they are within ProxmoxVE) without needing to completely redo our build process, that might make it more appealing. Being "officially" listed by default in LXD would raise awareness about TurnKey, giving it a double advantage: supporting existing TurnKey & LXD users' preferred platform, plus bringing in new TurnKey users.
Unfortunately though, that is highly unlikely as all the "official" images are built using a completely different paradigm to what we use to produce our images (the "official" images are built from scratch, ours are built from our ISOs).
By my understanding we could provide our own infrastructure and another new build type. We're not necessarily against another new build type, but currently we don't have the bandwidth to support further infrastructure beyond our image mirror (FWIW Proxmox leverages that). If there was some way that we could easily be hooked into LXD (similar to Proxmox) and all we needed to supply was the mirrored images, then that might be possible?!
I'd love to say that we'd totally support community development of this LXD integration. We completely do in theory, but I can't even promise the bandwidth to review and merge 3rd party code for something so significant at this point. Although if someone can provide a PoC (proof of concept) then perhaps we could quarantine some time to look a bit closer?
Hopefully once we get v16.0 out the door, we might have time to come up for air before we need to turn around and start again on v17.0... But even then, I'm not sure that this idea would be high enough priority for us to put much energy into. Also, note that Debian testing/Bullseye (what v17.x will be based on) is going to have its first freeze in just over 6 months! After running so far behind schedule with v16.x, I'm really keen to start work on v17.x before Bullseye gets too far along the development cycle!
So basically, we're flat out running to keep in the same place ATM. I'm not really sure what the answer to that is, but ATM I need to keep running. I hope this post doesn't discourage you too much Brian, but I wanted to be really realistic about it. I'm more than happy to discuss this idea further. And if anyone can see flaws in my logic (showing that it's easier than I think) and/or an alternate paths that could achieve the ends of making TurnKey "just work" on LXD, I'd love to hear!
So in summary, whilst it's an awesome idea and a fine goal, bottom line, I don't see this happening anytime soon... Sorry to be so crushing... :(
WOW
It's really superb news; I'm so excited to see it's possible to run multiple isolated "containers" on a single host.