Jeremy's picture

I've just discovered Turnkey and LXC together and I'm excited about what it can do.

I'm unclear about how much of the host's applications the container sees and can utilize.  Does it only see the kernel and resources?  Or can it see and use other applications and services from the host as well?

For example, if I install MySQL on the host, do I also need to install it in the container, or merely configure it in the container with its own config files?



Jeremy Davis's picture

Obviously they are a little more tightly integrated with the host OS than a 'true' VM (e.g. VMware, VirtualBox, Xen, KVM, etc.), but basically a container only sees the host kernel (which it treats as if it were its own, though it only has limited access to it).

So you can share MySQL from host to guest, but just like any other virtualisation scenario (or separate hardware) you will need to do it via the network. Depending on how you set it up, you can create a LAN-like internal network so that the traffic isn't exposed outside your host machine, if you desire...
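For example, a minimal sketch of that host-to-container sharing (the bridge name and addresses here are assumptions based on LXC's common defaults, where the host holds 10.0.3.1 on the lxcbr0 bridge): you'd point MySQL on the host at the bridge address in its config:

```ini
# /etc/mysql/my.cnf on the host (fragment) -- addresses are examples only
[mysqld]
# Listen on the host's address on the container bridge instead of 127.0.0.1,
# so containers (10.0.3.x) can reach it while it stays off the external NIC:
bind-address = 10.0.3.1
```

Containers would then connect to 10.0.3.1:3306 like any remote MySQL server (and you'd grant their MySQL users access from 10.0.3.% rather than localhost).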

Great name BTW! :)

Jeremy's picture

Hi!!  I just learned about this project today from hearing your interview on FLOSS Weekly!

Excellent work!

Thanks for responding.  That sounds like it answers my question...  If I use LXC to create 3 different containers on a single machine, then everything (except the kernel) will need to be installed in each container.

Can I assume that when I create a container using a Turnkey template that it only installs the minimum necessary?


p.s. Yes - Great Name!!

Jeremy Davis's picture

Nice one!

Yes, everything you want within the container will need to be installed on each one.

Essentially yes, but it depends on your needs. E.g. all TurnKey appliances have Webmin and Webshell installed. I rarely use either of them and for my own purposes would generally consider them 'non-essential'; however, many Linux newcomers really appreciate them. I never bother uninstalling them - they seem to use minimal resources...

Also, one of the most awesome things about using containers is that they are so light on resources to start with...! Even with the redundancy of having stuff installed multiple times (i.e. once in each container), you will find that the resource overhead of a container VM is nothing compared to a 'true' VM. That's been my experience with OpenVZ anyway (OVZ is another container virtualisation solution, which is also supported by TurnKey).

Hans Harder's picture

Of course, you can also run MySQL in a separate container instead of running it on the host machine.

By using a bridge as an internal network for the containers, you can connect to each service in the containers or on the host, and at the same time shield those services/ports of the containers from the outside.
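As a rough illustration of that setup (using the old-style lxc.network.* keys, since newer LXC renamed them to lxc.net.0.*; the bridge name and addresses are made-up examples):

```ini
# /var/lib/lxc/mycontainer/config (fragment)
lxc.network.type = veth          # virtual ethernet pair into the container
lxc.network.link = br0           # host-internal bridge, not attached to the external NIC
lxc.network.ipv4 = 10.0.3.101/24 # reachable only from the host and other containers
```

Because br0 isn't bridged to a physical interface, the containers' services are only reachable from the host (or whatever the host explicitly chooses to forward).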

The idea is not to be dependent on the host or host software versions... In my case, the host my containers run on is managed by a different group of people and I don't have rights on the OS.

Within the container I have root rights, and I can have my own toolchain/libraries/utilities/services independently of the host OS.

The only dependency I have on the host OS is that it should be some flavor of Linux with cgroup support.

The only negative thing about LXC right now is the filesystem it shares with the host.  OpenVZ has a nice solution for that called ploop, which I hope will eventually make it into the Linux kernel as a generic solution.

QUOTE:  ech`echo xiun|tr nu oc|sed 'sx\([sx]\)\([xoi]\)xo un\2\1 is xg'`ol

Jeremy's picture

When I first got introduced to containers, I had hoped it meant shared binaries across all the containers.  That would really reduce the footprint of each container (as opposed to installing MySQL into each one).  It would also mean that patching a binary on the host would "patch" the binaries in each container without separate maintenance, and if I installed an application I use on the host, it would automatically be available in each container.  Each container could maintain and use its own /etc and other configs and feel like its own private host.

Is there some solution out there that does this on the shared binary side?  Or is that just not feasible?



Jeremy Davis's picture

Docker is probably a little closer to what you are after (although not necessarily the current TKL implementation), but it is still more independent/abstracted from the host OS than what you are describing...

You could probably mix up your own thing using a combo of chroots and UnionFS/AuFS/OverlayFS, although TBH I'm not sure whether it would be worth it...
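A minimal sketch of that idea with OverlayFS (all paths here are hypothetical): each container root is a thin writable layer over one shared read-only base, so the base binaries exist on disk once, and patching the base effectively "patches" every layered root:

```
# /etc/fstab (fragment) -- hypothetical paths; needs in-kernel overlayfs (Linux >= 3.18)
overlay  /ct/web1/rootfs  overlay  lowerdir=/ct/base,upperdir=/ct/web1/upper,workdir=/ct/web1/work  0  0
overlay  /ct/web2/rootfs  overlay  lowerdir=/ct/base,upperdir=/ct/web2/upper,workdir=/ct/web2/work  0  0
```

Each container's own /etc changes land in its upperdir. The catch: any divergence (say, a container upgrading its own copy of a package) silently masks the shared base, which is part of why full per-container filesystems are simpler in practice.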

FWIW TurnKey appliances automagically take care of the main regular maintenance task (auto security updates) and can easily be configured for daily auto backups (TKLBAM). As for the appliance footprint, my testing server at home (which is ~8yo desktop hardware - Core2Duo and 8GB DDR2 RAM) has about 25 (various TurnKey) OpenVZ containers and 5 (4 TurnKey & 1 Win7) KVM VMs running happily. On that, a TKL LAMP server idles at 0.2% of one CPU and uses less than 150MB RAM (obviously under load it would use much more, but still...). That's not much of an overhead...! When you are logged into the commandline they are as responsive as you'd expect from bare metal (single CPU @ 2.3GHz & 512MB RAM).

At a place I do IT support for (I'm sitting there now...) they have 3 TKL LAMP servers (OVZ), 2 TKL CakePHP servers (OVZ), a (custom TKL based) SMTP relay server (OVZ), a DHCP/DNS/PXE server (OVZ), a (custom TKL based) NFS filestore (KVM), a Windows 2008R2 AD and app server (KVM), a Win7 VM (KVM) and a WinXP VM (KVM) all running simultaneously. Admittedly it's not a large organisation and doesn't have a ton of traffic (and most of the servers are used internally) but it's all running on desktop hardware (2yo i7, 16GB RAM) and in the last 24 hours it peaked at ~25% CPU and ~12GB RAM. Right now (with just me here and everything idling) it's sitting at ~2% CPU and ~8GB RAM (I think the Win machines tie up most of the RAM whereas the OVZ machines release it when not in use). It also has 15GB of swap allocated and none in use... And all of that including host OS has a HDD footprint of about 700GB (over 1/3 of that is Server 2008R2 alone).

Now obviously that's OpenVZ containers, but I imagine LXC would be pretty similar (they more-or-less work on the same principle). IMO unless you have some massive loads or requirements that containers can't fulfill, then they (containers) are a no-brainer! And despite the duplication, personally I like the redundancy. On occasion when there have been problems, I will just launch a new server (on the same host) and restore the last backup while I troubleshoot what is wrong with the original server. Also, using TKL, even if there is a hardware failure I can launch VMs through the Hub (on AWS) and restore my TKLBAM backups while I do hardware repairs. I haven't yet done that (I haven't needed to), and obviously it only applies to the TurnKey VMs, but it's nice peace of mind! :)

Sorry for the rantiness of my post, but I think you should try running some containers and do some testing before you bother trying to optimise too much. You know what they say... premature optimisation is the root of all evil!

Hans Harder's picture

I agree with Jeremy.

The only thing multiple LXC containers cost now is some extra HD space...

Like Jeremy, whenever I have a problem or want to test something, I just clone (tar) a running LXC container and start it up on my laptop in a VM with TKL-lxc...

All the updating of the containers with security patches is already handled by TKL.

We even have 2 TKL LAMP LXC containers with custom applications which are synced with a TKL SVN container. The only difference between those 2 LAMP containers is that we staggered the time when the security patches are applied, so we can detect problems with them in advance.


For the question underneath... no idea with Docker, but it should be possible.

For LXC we made a small daemon which listens on a pipe so the host can send commands to it... (we have an old LXC version running).

But basically, if you have sshd enabled in the container, you could use an SSH key and just execute the command from the host with ssh (no special LXC or Docker command needed).

QUOTE:  ech`echo xiun|tr nu oc|sed 'sx\([sx]\)\([xoi]\)xo un\2\1 is xg'`ol

Kalpana S Shetty's picture


I have started (and am running) a "mysql_docker" container on the host. Inside this container I have a MySQL server installed, along with sysbench. From my host I would like to start the MySQL service that is running inside the container ("mysql_docker"), and I also want to run a binary executable or shell script that is inside the container, but from the host shell.

I have tried "docker exec", but it does not seem to be working for me when I try to start the MySQL service.

Is there a way I can execute binaries inside the container from the host system?




Jeremy Davis's picture

I would assume not by default. Part of the point of Docker is that the Docker container is like a sandbox. If you are in the sandbox playing, you can go to town and make as much mess as you like without affecting the outside...

Having said that, if you actually want to sidestep the separation between the container and the host, then I'm sure you could do that. I don't know of any specific Docker ways, but I'm sure any other tool that would allow you to run processes on a remote machine would essentially do the same thing (e.g. one simple solution may be SSH!?)
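Both routes can be sketched like this (the container name, address, and key setup are assumptions, not taken from the thread; the snippet only prints the commands rather than running them, so it is safe to try and adapt):

```shell
# Hypothetical names/addresses -- adapt to your own setup.
CT_IP=10.0.3.101        # container's address on the internal bridge
CT_NAME=mysql_docker    # Docker container name

# Option 1: via SSH (assumes sshd runs in the container and the host's
# public key is in the container's authorized_keys):
SSH_CMD="ssh root@${CT_IP} service mysql start"

# Option 2: Docker-natively -- docker exec runs a process inside an
# already-running container:
DOCKER_CMD="docker exec ${CT_NAME} service mysql start"

# Print rather than execute, so this sketch has no side effects:
echo "$SSH_CMD"
echo "$DOCKER_CMD"
```

For an interactive shell inside the container, `docker exec -it mysql_docker bash` works the same way. Note that `docker exec` only works while the container's main process is already running.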
