John Carver's picture

At my home office, I've been doing TurnKey development work on a Dell PowerEdge 2950 server running Proxmox VE. It has served me well for several years, but when I decided to take the show on the road, it was not a practical solution for development. I started searching for a way to do development on my Dell Inspiron laptop without the burden of running VirtualBox. I liked what I saw in LXD, the next generation of Linux Containers (LXC). I became convinced it was possible, but found there were significant challenges, as discussed in this forum thread.

In the interest of getting more people involved in the development of TurnKey GNU/Linux appliances, I'm sharing the results of my work on GitHub at https://github.com/Dude4Linux/turnkey-pde. Setting up the development environment on an Ubuntu 16.04 laptop or workstation is as simple as cloning the project and running the included pde-setup script.

Make a user directory for development work, or use one you already have.

$ mkdir -p ~/devops
$ cd ~/devops

Clone the TurnKey PDE from GitHub.

$ git clone https://github.com/Dude4Linux/turnkey-pde.git
$ cd turnkey-pde

Run the PDE installation script.

$ ./pde-setup

Note that the script is run as an ordinary user, not root. The user must have sudo privileges; you will be prompted for the sudo password when needed.

Examples of using the development environment and setting up a TKLdev container can be found in the README.md.

Most TurnKey appliances will run just fine in an unprivileged, user-space container. The TKLdev and LXC appliances require special configuration: TKLdev must run with security.privileged enabled, while the LXC appliance needs security.nesting enabled.
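For reference, both settings can be applied with standard LXD commands (the container names here are only examples):

$ lxc config set tkldev security.privileged true
$ lxc config set lxc-appliance security.nesting true
$ lxc restart tkldev lxc-appliance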

I hope I have identified all of the necessary components and configurations.  If you experience problems running the tests, please open an issue on GitHub.

Peter C. (Benchwork)'s picture

This is fantastic, I will be testing this out soon!!

John Carver's picture

Peter, I appreciate your willingness to help with testing. As with most new projects, I've found some issues with the initial release. Most of the simpler appliances, like core and lamp, will run in the PDE just fine, but more complicated appliances will not be fully functional.

While working on the v1.1 release, I've apparently made a change that breaks the dnsmasq setup. I've seen this before but never pinned down the cause or the fix: new containers don't properly register with dnsmasq.

If you haven't yet installed the PDE, I recommend that you check out the v1.0 tag before running pde-setup.

$ git clone https://github.com/Dude4Linux/turnkey-pde.git
$ cd turnkey-pde
$ git checkout v1.0

Let me know of any problems you encounter.

Information is free, knowledge is acquired, but wisdom is earned.

John Carver's picture

Thanks Simon, I haven't seen any suggestion that LXC/LXD use virtio drivers, but there may be a connection.

Does anyone know the purpose of 01ipconfig in firstboot.d? I know what it does, but I don't know why; I can't remember seeing it before. I think I've found a race condition. LXC and LXD create an /etc/network/interfaces file with the hostname set to the container name. When the container is started, dnsmasq picks up the container name from the initial DHCP request. Meanwhile, the first thing the init sequence does is run 01ipconfig, which replaces /etc/network/interfaces with a new file containing neither host nor container name. If 01ipconfig finishes first, the container still receives an IP lease, but dnsmasq never learns its name. That is what I've been seeing.
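For example, dumping the bridge's lease file shows the symptom (with LXD's managed networks the file is typically /var/lib/lxd/networks/lxdbr0/dnsmasq.leases; the exact path depends on the setup):

$ cat /var/lib/lxd/networks/lxdbr0/dnsmasq.leases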

1520725192 00:16:3e:81:1b:a3 10.76.85.138 mycontainer 01:00:16:3e:81:1b:a3
1520725192 00:16:3e:2a:00:6e 10.76.85.227 * 01:00:16:3e:2a:00:6e
1520725191 00:16:3e:a8:b9:62 10.76.85.136 mautic 01:00:16:3e:a8:b9:62

Here the * should be core-test. I think it is safe to remove 01ipconfig, but I'd like to know why it was put there in the first place.

Information is free, knowledge is acquired, but wisdom is earned.

Jeremy Davis's picture

If you browse to the history of 01ipconfig on GitHub (and click on the ellipses next to the commit message to expand the full message), it says:
this adds a hook to preseed an image with a static ipv4 config

in case you want to use this script you should set
eth0 in your /etc/network/interfaces to manual.

I haven't heard from Peter for ages now, but he was a developer with a hosting partner who contributed quite a bit of code a few years ago. I don't recall the exact context of him adding it, but it was merged by Alon so I assume that he must have thought it was a good idea...

If this behaviour is causing issues for LXC/LXD containers, there are a few ways we could tackle it. We could disable that script by default (i.e. chmod -x /usr/lib/inithooks/firstboot.d/01ipconfig) for all LXC builds (in buildtasks), or we could do it within the LXC appliance when the template is first set up (before firstboot).

FWIW, I haven't encountered any issues with that on Proxmox (LXC) and seeing as that is the primary aim of our current LXC build, I'm inclined to do it on the LXC appliance itself.

Thoughts?

John Carver's picture

Thanks Jeremy. When I first read the code, I thought it was rewriting /etc/network/interfaces in all cases. Rereading it earlier today, I saw that this happens only if $IP_CONFIG is defined. I assume this allows overriding the network config by defining variables in inithooks.conf.

I was able to figure out why some test containers weren't registering their names with dnsmasq. It turns out that udhcpc will include the hostname when it requests an IP address if the hostname is defined in /etc/network/interfaces, but it must be in the same stanza as the iface line; tacking it on at the end doesn't always work. I'll add some examples tomorrow.

Information is free, knowledge is acquired, but wisdom is earned.

John Carver's picture

Most, if not all, TurnKey appliances don't include the hostname in /etc/network/interfaces. In order to get containers to register properly with dnsmasq, we need to add the hostname where udhcpc can find it when it makes the initial request for an IP address. Appending it to the end of /etc/network/interfaces worked in most cases, but those with multiple interfaces failed.

For example 

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet dhcp

hostname core-test

won't work because the hostname is associated with the second interface, while

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp
    hostname core-test

auto eth1
iface eth1 inet dhcp
    hostname core-test

works as desired. I presume that the hostname needs to be specified for each interface that uses DHCP. The LXC appliance's config should look like this:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet manual

auto br0
iface br0 inet dhcp
    bridge_ports eth0
    bridge_fd 0
    bridge_maxwait 0
    hostname lxc-test

auto natbr0
iface natbr0 inet static
    bridge_ports none
    bridge_fd 0
    bridge_maxwait 0
    address 192.168.121.1
    netmask 255.255.255.0

I'm not sure what the impact of adding this upstream to all appliances would be, but I think it would be helpful. I know I have issues getting VMs in Proxmox, running in bridge mode, to register with my home dnsmasq server. There may be another way to tell udhcpc what hostname to use, but I haven't found it yet.

I'm working on making lxc-turnkey handle /etc/network/interfaces more intelligently by parsing the file and using the information to adjust the container config.
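As a rough illustration of the hostname fix above (a sketch only, not the actual lxc-turnkey code; the name and output file are placeholders), inserting a hostname line into every DHCP stanza could be done with a few lines of awk:

NAME=core-test
awk -v host="$NAME" '
    { print }
    /^iface .* inet dhcp/ { print "    hostname " host }
' /etc/network/interfaces > interfaces.new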

Information is free, knowledge is acquired, but wisdom is earned.

Jeremy Davis's picture

Perhaps it's worth noting as an issue on the tracker? Although TBH, we're so far behind schedule for the v15.0 release that, unless it's security related or a bug in the appliance itself, I'd rather not include much more by default in v15.0 appliances. I'd much rather it just be done on the LXC appliance, at least for the next release.

Once v15.0 is out and I can relax a little more, then we can look at it a bit more closely and aim to include it in v15.1. That way we can do some more rigorous testing to ensure it doesn't introduce any regressions.

Does that sound fair and reasonable to you?

Jeremy Davis's picture

Re virtio drivers, I'm almost certain that there is no connection with LXC/LXD. AFAIK virtio is virtual hardware explicitly (and exclusively) provided by/for KVM. As LXC/LXD leverages the host kernel, there is no need for drivers within an LXC/LXD container.

Having said that, there may still be some interesting interactions and/or unexpected consequences (that I am unaware of and haven't struck yet) when running LXC/LXD within a KVM VM?!

John Carver's picture

Hi Peter,  Did you ever get a chance to try the TurnKey PDE?  If you did, be sure to read my last post on this thread about updating for LXC/LXD 3.0.1.

Information is free, knowledge is acquired, but wisdom is earned.

Jeremy Davis's picture

Thanks for sharing your work John. I'm sure this will be useful for others. We'll certainly look at how we might be able to integrate some of this for v15.0 if possible. As per usual, we'll focus on ISOs first; we actually have ~75% of the library ready to build once we have a stable v15.0 Core. Then we'll make sure that the other builds are up to scratch. Hopefully that should all go fairly smoothly.

Once we've got to that point, then we can start looking further afield to things such as adding some more new appliances, and other builds, such as LXD.

John Carver's picture

When working remotely with limited bandwidth, it is important to minimize repeated downloading of Debian packages via Apt. One way of doing this is to add an Apt cache proxy to the host running the TurnKey GNU/Linux Portable Development Environment (PDE). On the other hand, we wish to avoid multiple caches of the same file: the LXC appliance caches downloaded Proxmox-formatted images, and the TKLdev appliance, by default, uses Polipo to cache all downloaded files, including deb packages.

1) Choosing an Apt Cache Proxy

a) squid-deb-proxy

  • Installs squid3 and sets up two proxies, one for HTTP and one for Apt
  • Could not get the apt proxy to accept PPA or TurnKey packages

b) polipo

c) apt-cacher-ng

  • Next-generation replacement for apt-cacher
  • This is the package we chose to use

2) Install apt-cacher-ng

sudo apt-get -qy update
sudo apt-get -qy install -t xenial-backports apt-cacher-ng

3) Create 01proxy and install it on all clients

Set the proxy for the host to localhost:

echo "Acquire::http { Proxy "http://127.0.0.1:3142"; };" | sudo tee /etc/apt/apt.conf.d/01proxy

Set the proxy for containers to the LXD bridge interface:

PROXY=$(lxc network get lxdbr0 ipv4.address)
echo "Acquire::http { Proxy "http://${PROXY%/[0-9]*}:3142"; };" > 01proxy

For each container, push the 01proxy file (Apt inside the container will pick it up on its next run):

for container in $(lxc list --format=csv -cn); do
    lxc file push 01proxy ${container}/etc/apt/apt.conf.d/01proxy --uid=0 --gid=0
done
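To confirm that a container is actually using the cache, run an update inside it and watch the apt-cacher-ng log on the host (the log path is the package default; the container name is an example):

lxc exec mycontainer -- apt-get -qy update
tail /var/log/apt-cacher-ng/apt-cacher.log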

4) Ensure clients only use http:// URLs in their source lists.
apt-cacher-ng refuses to cache https:// URLs.
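A quick check for offending entries (assuming the standard Apt sources layout):

grep -rn "https://" /etc/apt/sources.list /etc/apt/sources.list.d/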

5) Configure apt-cacher-ng to pass through HTTPS requests
Add the following line to /etc/apt-cacher-ng/acng.conf:

PassThroughPattern: .* # this will allow CONNECT to everything including HTTPS

and then restart the service:

sudo service apt-cacher-ng restart

6) Configure the firewall to allow containers to access apt-cacher-ng

sudo ufw allow in on lxcbr0 to any port 3142 proto tcp
sudo ufw allow in on lxdbr0 to any port 3142 proto tcp

7) The TKLdev appliance needs some additional configuration
Change FAB_APT_PROXY in the container's /root/.bashrc.d/fab to use apt-cacher-ng.
Replace 10.76.85.1 with the PROXY address from step 3.

export FAB_APT_PROXY=http://10.76.85.1:3142

Leave FAB_HTTP_PROXY pointing to Polipo on localhost:

export FAB_HTTP_PROXY=http://127.0.0.1:8124
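If you'd rather make the change from the host, a one-liner along these lines should work (the container name and proxy address are examples, and it assumes the export line already exists as shown above):

lxc exec tkldev -- sed -i 's|^export FAB_APT_PROXY=.*|export FAB_APT_PROXY=http://10.76.85.1:3142|' /root/.bashrc.d/fab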

Note: Edited 02/21/2018 to use 01proxy, which may already exist in some appliances, and to push files as root:root.

Information is free, knowledge is acquired, but wisdom is earned.

Jeremy Davis's picture

We should probably consider including apt-cacher-ng in the LXC appliance too? And if your testing suggests that it provides a better experience than Polipo for package caching, then perhaps we should consider using it in TKLDev as well?

Regarding TKLDev and Polipo, perhaps we should consider replacing the generic HTTP proxy bit with Squid in the near future? It's also worth revisiting the idea of supporting HTTPS caching in TKLdev (essentially via a MITM cache mechanism, i.e. the HTTPS connection terminates at the proxy).

John Carver's picture

@Jeremy, I think adding apt-cacher-ng to the LXC appliance would be a good idea. Since I'm already working on LXC 15.0, I'd be happy to include this. I've been thinking about how to do this while still allowing it to be overridden by pre-seeding inithooks.conf. I would propose a setup similar to how TKLdev handles Polipo.

Information is free, knowledge is acquired, but wisdom is earned.

John Carver's picture

apt-cacher-ng was added to the LXC appliance by Alon back in version 13.0. When I was thinking of caching in LXC, I was thinking of the image caching that LXC does; I completely forgot that it was also caching apt packages. All we need to do is arrange the configs so that if the LXC (v1) appliance is running nested in an LXD (v2) container, it will use the LXD host's cache rather than its own. Basically, apt-cacher-ng on the host should take priority, so that all containers and nested sub-containers use the top-level cache. I'll see what I can work out.
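One way to arrange that might be apt-cacher-ng's own proxy chaining: point the nested instance at the host's cache by setting the upstream proxy in its /etc/apt-cacher-ng/acng.conf. The address below is the host bridge IP from my earlier post and will differ on other setups:

Proxy: http://10.76.85.1:3142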

Information is free, knowledge is acquired, but wisdom is earned.

Jeremy Davis's picture

Ah ha! Shows how much attention I was paying too! TBH, I too missed that apt-cacher-ng was already installed! Doh!

Good luck with it mate! :)

John Carver's picture

If you have been using the TurnKey PDE (portable development environment) on Ubuntu 16.04 LTS, then you have probably seen the warning about needing to run a partial upgrade due to package conflicts. The problem apparently arises from our use of LXC and LXD packages from `xenial-backports`. It has taken some time to figure out a remedy, but I have released a new version of the TurnKey PDE which deals with the release of LXC & LXD 3.0.1 to `xenial-backports`.

You can clone the new version, turnkey-pde v1.2, from GitHub, or if you have previously cloned an earlier version, use `git pull origin master`. Then run `./pde-setup` for a new installation or `./pde-setup --update` to update an existing one.
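In full, assuming the clone lives where the original instructions put it:

$ cd ~/devops/turnkey-pde
$ git pull origin master
$ ./pde-setup --update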

If you already ran the partial upgrade, you have probably found that the old 2.x versions of LXC & LXD were removed but the installation of the new versions failed. Not to worry: if you now run `./pde-setup --update`, the necessary changes will be made and the new versions of LXC & LXD will be installed. Any containers you created with the old version should be updated and working. Of course, it's preferable to run `./pde-setup --update` before performing the partial upgrade; you can simply cancel the partial upgrade and `./pde-setup --update` will take care of the rest.

As always, please report any problems by opening an issue on GitHub.

Information is free, knowledge is acquired, but wisdom is earned.

Jeremy Davis's picture

Sounds like your PDE is maturing nicely! :)
