spammy's picture

Are there any plans to make TKL images available via a LXD compatible image server?

Jeremy Davis's picture

TBH I wasn't aware that LXD had any image servers. But a quick google demonstrates that indeed there are! :)

Do you know much about it? Like what requirements there are to have an app listed? From what I can see the "official" LXD library images are all built by scripts contained within a specific GitHub repo. If that's the requirement then it's too hard IMO. If there is some other mechanism then it may well be possible...

Another thought is that perhaps if we can understand the requirements of a "LXD library" we could create our own?

spammy's picture

>Another thought is that perhaps if we can understand the requirements of a "LXD library" we could create our own?

Actually... that was exactly what I was asking for :). To be honest I only know what I learned from here:

https://linuxcontainers.org/lxd/try-it/

It appears that anyone can host LXD images; it's part of what it is, similar to Git. I suspect that you should be able to present images even more transparently than you do for e.g. OpenVZ. In fact it'd probably be preferable for you to have your own image server rather than contribute to a public one, although I'm not sure what the resource/security implications are.

Jeremy Davis's picture

TBH I don't know when we'll get a chance to look into this further but I've added it to the tracker (https://github.com/turnkeylinux/tracker/issues/670) so we don't forget about it. TBH though, unless someone from the community looks into it further, I don't imagine that it will happen any time soon...
spammy's picture

Exactly what I had in mind, but I didn't quite grasp the fuller detail enough to present it. Thank you for the elaboration.

Jeremy Davis's picture

That's great! It does indeed appear to be fairly straightforward.

My only reservation is that it sounds like we will need an additional (LXD) server to host the images. Obviously that in and of itself is not a big deal. But then it's another server that needs to be maintained. The "weird" Apache config (that pretends to be an LXD server - as noted by Stephane) sounds more like what we'd want. Then we could just add an additional vhost to our existing webserver (i.e. for this website) that provides the images direct from our mirror. FWIW I just posted on the GitHub thread.

Regardless it's certainly on our todo list. Unfortunately though our todo list is very long so I have no idea when we might get to it.

Jeremy Davis's picture

And Stephane has provided some alternate info that sounds like it should be really easy. We'll still need to have a play with it and configure our webserver to use it, but we should be able to script the relevant info and generate the json files automatically.

I'm still not sure when we'll get to it, but it sounds doable. If one of you guys wants to have a play with it that'd certainly push things forward, although I can't guarantee when we'd be able to implement it. The less we need to do ourselves and the easier it is for us to implement, the sooner we can roll it out...

John Carver's picture

Hi Jeremy, I took a look at Stephane's suggestions and attempted to adapt a 14.1 lxc image to create an lxd container running on Ubuntu Xenial. I used the Debian Jessie files from the links Stephane provided and adapted them for a TurnKey appliance (see the post below).

In addition to the two json files that Stephane described, you will also need to create a tarred metadata file with accompanying templates. LXD uses .xz compression by default, but gzip will also work. By convention, linuxcontainers.org uses lxd.tar.xz for the tarred metadata and templates, keeping all images in separate folders. I chose to use the same name as the original image, but with '.lxd' inserted, so that all files can be stored in the same folder:

.
├── debian-8-turnkey-tkldev_14.1-1_amd64.lxd.tar.gz
├── debian-8-turnkey-tkldev_14.1-1_amd64.tar.gz
├── metadata.yaml
└── templates
    ├── hostname.tpl
    └── hosts.tpl

The metadata file must be named metadata.yaml:

{
    "architecture": "x86_64",
    "creation_date": 1460431832,
    "properties": {
        "architecture": "x86_64",
        "description": "TKLDev - TurnKey Development Toolchain and Build System (20160411)",
        "name": "turnkey-tkldev-14.1-jessie-amd64",
        "os": "turnkey",
        "release": "jessie",
        "variant": "14.1"
    },
    "templates": {
        "/etc/hostname": {
            "template": "hostname.tpl",
            "when": [
                "create",
                "copy"
            ]
        },
        "/etc/hosts": {
            "template": "hosts.tpl",
            "when": [
                "create",
                "copy"
            ]
        }
    }
}

The accompanying template files will probably be the same for all appliances. The supplied templates simply write container.name into /etc/hostname and /etc/hosts whenever the container is created or copied. It's possible we could create other templates to perform additional functions.

templates/hostname.tpl

{{ container.name }}

templates/hosts.tpl

127.0.0.1   localhost
127.0.1.1   {{ container.name }}

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

Stephane mentions two ways to implement the lxd repo: one using a customized Apache server and the other using simplestreams. I couldn't figure out whether simplestreams is a requirement for the second method or just a convenient way to create and manage the json files. There doesn't seem to be a package for simplestreams on Debian.
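For reference, here's my rough understanding of the simplestreams layout, gleaned from poking at the linuxcontainers.org server (the field names and the product key below are my best guess, so verify against a real stream before relying on this):

streams/v1/index.json     # entry point, points at the products file
streams/v1/images.json    # per-product version and file details
images/...                # the actual lxd.tar.xz / rootfs tarballs

A minimal streams/v1/index.json might look like:

{
    "format": "index:1.0",
    "index": {
        "images": {
            "datatype": "image-downloads",
            "format": "products:1.0",
            "path": "streams/v1/images.json",
            "products": ["turnkey:tkldev:14.1:amd64"]
        }
    }
}

with images.json then listing, for each product version, the metadata and rootfs tarballs along with their size and sha256. If that's all the simplestreams tooling generates, hand-writing or scripting the json (as Jeremy suggested above) may be enough, without the Python tooling.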

Information is free, knowledge is acquired, but wisdom is earned.

Jeremy Davis's picture

TBH this isn't really high on our agenda. I think it would be really awesome to provide, but it's not a priority for us ATM.

Still fantastic info you've shared here! Hopefully we'll get a chance to come back to this and give it some love! :)

bmullan's picture

I know TurnKey builds LXC (lxc1) containers, but there was a good external thread recently about how to convert original lxc1 containers to lxc2 (i.e. LXD) containers.

https://lists.linuxcontainers.org/pipermail/lxc-users/2016-July/012000.html

Fajar Nugraha is one of the LXD developers and per his last statement on that thread:

There's a script to convert lxc -> lxd somewhere on this list, but I usually do things manually: 
(1) create a container in lxd. Start it, stop it, then look at its uid mapping (i.e. "which u/gid owns /var/lib/lxd/containers/container_name/rootfs") 
(2) use fuidshift with "-r" to shift your lxc container u/gid back to privileged, using the starting u/gid value in your original lxc config (should be 951968) 
(3) use fuidshift again, but this time without "-r", to shift your lxc container to unprivileged, using the starting u/gid value from (1) 
(4) move your new lxd container's original rootfs somewhere else (or delete it if you want), then replace it with rootfs from (3) 
(5) start your lxd containers 

So if anyone is interested, you can migrate an LXC container to LXD and then take advantage of LXD to manage it locally or remotely, use CRIU (live migration), and enjoy some of the other nice features LXD added to the original LXC (lxc1).
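To make steps (2) and (3) concrete, here's a hedged sketch of the fuidshift invocations (the 'b' prefix shifts both uid and gid; the paths, the 100000 offset and the 165536 target are examples only - use the values from your own configs, and check fuidshift's usage output as I may have the flag placement wrong):

# (2) shift the old lxc rootfs back to privileged (reversing an existing offset)
$ sudo fuidshift /var/lib/lxc/mycontainer/rootfs -r b:0:100000:65536
# (3) shift it forward again into the range the new LXD container uses (from step 1)
$ sudo fuidshift /var/lib/lxc/mycontainer/rootfs b:0:165536:65536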

The "script" that Fajar refered to may be this one:

https://github.com/lxc/lxd/blob/master/scripts/lxc-to-lxd

Brian


John Carver's picture

I recently started looking at LXD as an alternative to Proxmox for a development environment. I simply can't lug my home development server when I hit the road.  I'm looking for a way to run multiple TurnKey images on my laptop for a self-contained development environment. I tried using LXC for this purpose a few years ago, but never quite got it to work. The good news is that LXD is compatible with the TurnKey images for Proxmox 4.x. 

Following are my user notes from testing on a laptop running Ubuntu 16.04 (Xenial).

LXD Users Guide

LXD is designed to work alongside LXC and provide a simpler user interface for managing LXC containers.

Installation:

$ sudo apt-get install lxd lxd-tools
$ newgrp lxd 

Initialize LXD:

Run the following command, selecting the default settings, except for the creation of an IPv6 network.

$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=dir]:
Would you like LXD to be available over the network (yes/no) [default=no]?
Do you want to configure the LXD bridge (yes/no) [default=yes]?
Warning: Stopping lxd.service, but it can still be activated by:
  lxd.socket
LXD has been successfully configured.

Working with LXD:

$ lxc image list
+-------+-------------+--------+-------------+------+------+-------------+
| ALIAS | FINGERPRINT | PUBLIC | DESCRIPTION | ARCH | SIZE | UPLOAD DATE |
+-------+-------------+--------+-------------+------+------+-------------+
$ lxc image copy lxc-org:/debian/jessie/amd64 local: --alias=jessie-amd64
Image copied successfully!
$ lxc image list
+--------------+--------------+--------+----------------------------------------+--------+---------+-----------------------------+
|    ALIAS     | FINGERPRINT  | PUBLIC |              DESCRIPTION               |  ARCH  |  SIZE   |         UPLOAD DATE         |
+--------------+--------------+--------+----------------------------------------+--------+---------+-----------------------------+
| jessie-amd64 | 06b11f7b270f |   no   | Debian jessie (amd64) (20170104_22:42) | x86_64 | 94.05MB | Jan 5, 2017 at 5:40pm (UTC) |
+--------------+--------------+--------+----------------------------------------+--------+---------+-----------------------------+
$ lxc launch jessie-amd64 jessie-01
Creating jessie-01
Starting jessie-01
$ lxc list
+-----------+---------+---------------------+------+------------+-----------+
|   NAME    |  STATE  |        IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+-----------+---------+---------------------+------+------------+-----------+
| jessie-01 | RUNNING |                     |      | PERSISTENT | 0         |
+-----------+---------+---------------------+------+------------+-----------+
$ lxc stop jessie-01 --force
$ lxc list
+-----------+---------+---------------------+------+------------+-----------+
|   NAME    |  STATE  |        IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+-----------+---------+---------------------+------+------------+-----------+
| jessie-01 | STOPPED |                     |      | PERSISTENT | 0         |
+-----------+---------+---------------------+------+------------+-----------+
$ lxc delete jessie-01
$ lxc list
+-----------+---------+---------------------+------+------------+-----------+
|   NAME    |  STATE  |        IPV4         | IPV6 |    TYPE    | SNAPSHOTS |
+-----------+---------+---------------------+------+------------+-----------+

Importing TurnKey Image:

The goal is to be able to import and run TurnKey images under LXD. Fortunately the 14.x images created for Proxmox LXC format are compatible with LXD. Following the examples by Stéphane Graber, we must manually (for now) create some metadata to accompany the TurnKey image. First download the desired image from the proxmox directory on mirror.turnkeylinux.org, for example debian-8-turnkey-tkldev_14.1-1_amd64.tar.gz. Try to use a method such as rsync that preserves the original creation/modification dates.

$ rsync -av -P rsync://mirror.turnkeylinux.org/turnkeylinux/images/proxmox/debian-8-turnkey-tkldev_14.1-1_amd64.tar.gz ./

Determine the creation_date from the last modified date of the image file.

$ stat -c%y debian-8-turnkey-tkldev_14.1-1_amd64.tar.gz 
2016-04-11 22:30:32.000000000 -0500
$ stat -c%Y debian-8-turnkey-tkldev_14.1-1_amd64.tar.gz 
1460431832

20160411 is the date the image was created; 1460431832 is the same date and time in epoch format. The architecture is x86_64 for an amd64 image ("amd64" may also work).

Create a subdirectory "templates" and the following three files. Change the name, description, etc. to correspond to the appliance image.

metadata.yaml

{
    "architecture": "x86_64",
    "creation_date": 1460431832,
    "properties": {
        "architecture": "x86_64",
        "description": "TKLDev - TurnKey Development Toolchain and Build System (20160411)",
        "name": "turnkey-tkldev-14.1-jessie-amd64",
        "os": "turnkey",
        "release": "jessie",
        "variant": "14.1"
    },
    "templates": {
        "/etc/hostname": {
            "template": "hostname.tpl",
            "when": [
                "create",
                "copy"
            ]
        },
        "/etc/hosts": {
            "template": "hosts.tpl",
            "when": [
                "create",
                "copy"
            ]
        }
    }
}

templates/hostname.tpl

{{ container.name }}

templates/hosts.tpl

127.0.0.1 localhost
127.0.1.1 {{ container.name }}

# The following lines are desirable for IPv6 capable hosts 
::1     ip6-localhost ip6-loopback 
fe00::0 ip6-localnet 
ff00::0 ip6-mcastprefix 
ff02::1 ip6-allnodes 
ff02::2 ip6-allrouters

Create a tar file of the metadata and templates for LXD. The name could be anything, but I chose to name it similar to the original image.

$ tar -czvf debian-8-turnkey-tkldev_14.1-1_amd64.lxd.tar.gz metadata.yaml templates/*

Now import the image into LXD.

$ lxc image import debian-8-turnkey-tkldev_14.1-1_amd64.lxd.tar.gz debian-8-turnkey-tkldev_14.1-1_amd64.tar.gz --alias turnkey-tkldev_14.1_amd64
$ lxc image list
+---------------------------+--------------+--------+--------------------------------------------------------------------+--------+----------+-----------------------------+
|           ALIAS           | FINGERPRINT  | PUBLIC |                            DESCRIPTION                             |  ARCH  |   SIZE   |         UPLOAD DATE         |
+---------------------------+--------------+--------+--------------------------------------------------------------------+--------+----------+-----------------------------+
| turnkey-tkldev_14.1_amd64 | 20c8484e42bf | no     | TKLDev - TurnKey Development Toolchain and Build System (20160411) | x86_64 | 197.68MB | Jan 6, 2017 at 6:48pm (UTC) |
+---------------------------+--------------+--------+--------------------------------------------------------------------+--------+----------+-----------------------------+

Next you can create (launch) a container running the image.

$ lxc launch turnkey-tkldev_14.1_amd64 tkldev
Creating tkldev
Starting tkldev
$ lxc list
+-----------+---------+---------------------+------+------------+-----------+
|   NAME    |  STATE  |     IPV4            | IPV6 |    TYPE    | SNAPSHOTS |
+-----------+---------+---------------------+------+------------+-----------+
| tkldev    | RUNNING | 10.76.85.211 (eth0) |      | PERSISTENT | 0         |
+-----------+---------+---------------------+------+------------+-----------+

Caveats:

Notice that 'launch' both creates and starts the container, unlike LXC which had separate commands. There is no method for loading an inithooks.conf file, but the init sequence creates one on the fly with a random password. Since there is no /dev/console, turnkey-init is not triggered and must be run manually. Connect to the container using the 'exec' command.

$ lxc exec tkldev bash
root@tkldev ~# turnkey-init
...

Information is free, knowledge is acquired, but wisdom is earned.

John Carver's picture

Hi Brian. I didn't see your preceding post because I was busy composing my post. :)

I think Fajar's script addresses a different use case, i.e. converting or moving an existing lxc1 container to an lxd container. My effort was in exploring how to use an existing proxmox (lxc1) rootfs image to create the lxd container. Admittedly I'm just learning to use LXD, so I may not be aware of all the possibilities. I haven't tried to use a 13.x (openvz) image with LXD but I suspect it would not work. I think you would first have to create an lxc1 container using the lxc-turnkey template, and then use Fajar's script to convert it to lxd format.
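For what it's worth, that path might look something like this (the lxc-create invocation follows the LXC appliance docs; the lxc-to-lxd call is a guess at the script's interface, so check its actual usage):

# on an LXC v1 host with the TurnKey template installed
$ lxc-create -n core -t turnkey -- core -i /root/inithooks.conf
# then convert the resulting v1 container with the script Fajar mentioned
$ ./lxc-to-lxd core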

Information is free, knowledge is acquired, but wisdom is earned.

John Carver's picture

Brian, I've run into a problem trying to run the TKLdev appliance in an unprivileged LXD container. Inside the container, mount is blocked by AppArmor even though I'm root. I've been looking for an example of how to create a profile that allows mount to operate within the container but not outside it, for security. I haven't found anything helpful so far. Do you have any ideas?

mount: permission denied
fatal: non-zero exitcode (32) for command: mount -t aufs -o 'dirs=/turnkey/fab/bootstraps/jessie,udba=reval'  'none'  'build/bootstrap'
/usr/share/fab/product.mk:476: recipe for target 'build/stamps/bootstrap' failed
make: *** [build/stamps/bootstrap] Error 1

dmesg shows a number of these errors.

[360426.345856] audit: type=1400 audit(1483730835.496:72): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-tkldev_</var/lib/lxd>" name="/sys/fs/pstore/" pid=7328 comm="mount" flags="rw, nosuid, nodev, noexec, remount, relatime"
[471499.216660] audit: type=1400 audit(1483841915.104:76): apparmor="DENIED" operation="mount" info="failed flags match" error=-13 profile="lxd-tkldev_</var/lib/lxd>" name="/" pid=25654 comm="mount" flags="ro, remount, relatime"
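For the record, the two blunt options I'm aware of are below, though neither is the fine-grained profile I'm after (raw.apparmor appends rules to LXD's generated profile - my rule syntax here is untested - and security.privileged obviously weakens isolation):

$ lxc config set tkldev raw.apparmor "mount fstype=aufs,"
# or, the blunt instrument:
$ lxc config set tkldev security.privileged true
$ lxc restart tkldev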

Information is free, knowledge is acquired, but wisdom is earned.

bmullan's picture

John, you are correct re "I think you would first have to create an lxc1 container using the lxc-turnkey template, and then use Fajar's script to convert it to lxd format."


Junaid Shahid's picture

Yeah, he is right. I have tried that and it worked!

Junaid Shahid's picture

Hey thanks, I have checked this; that's really cool :)

John Carver's picture

After I completed work on the 14.2 updates for the LXC appliance, I turned my attention back to LXD. I soon found that although images created using the process I outlined above initially worked, there were nagging issues, like not being able to stop a running container or use mount within a running container. After far too much experimentation, it finally hit me that some of the issues were caused by not applying the container patches included in the lxc-turnkey template, and that others, such as mounting in an unprivileged container, required the additional changes outlined by Brian Mullan.

My first thought was to just install LXD on the 14.2 LXC appliance and script the conversion process. To my surprise, I found that development of LXD for Debian is lagging and it likely won't be ready for the upcoming release of stretch. That means a fully supported LXD appliance would have to wait until the TurnKey 16.0 release (Debian 10). Stéphane Graber has outlined an alternate procedure for installing LXD on Debian stretch using snaps. If Jeremy, Alon, and Liraz are agreeable, this might be a path to develop a 15.0 LXD appliance.

In the meantime, I plan on scripting the download and conversion on my Ubuntu laptop because I really want to have a portable development environment.

Information is free, knowledge is acquired, but wisdom is earned.

Jeremy Davis's picture

Your suggestion of using an LXD snap to provide a v15.x LXD host appliance sounds like a legitimate plan to me. Obviously I'll need to discuss it with the other guys, but on face value I don't see why they wouldn't be open to that.
bmullan's picture

John

I just ran across this and you might find it interesting in regards to migrating LXC (v1) containers to LXD (lxc v2) containers...

thread about script to migrate LXC v1 containers to LXD (lxc v2) containers


John Carver's picture

Thanks for the link to the lxc-to-lxd script. Actually I had found it earlier in my research and have had some good success in using it to create LXD images.  I've been planning to write a new post about how I created a TKLdev workstation on Ubuntu 16.04 using LXC and LXD.  The process starts by using lxc v1 to download a TurnKey Proxmox image to a v1 container. Some additional tweaks are performed and then the lxc-to-lxd script is used to convert to a v2 container. The normal LXD commands are then used to create a v2 image. I haven't yet written a script for the process, but that's the obvious next step.

I did have an issue with DNS name resolution which I only figured out today. I couldn't figure out why the LXD container names were not being resolved by dnsmasq as they are with the LXC containers. It finally dawned on me that the hostname used within the container defaults to the appliance name, not the container name. When the container is first started, an IP address is assigned to the initial default name. For example, when I create a new container, linuxgeeks, from a Drupal8 image, the hostname for the container will be drupal8.lxd, not the desired linuxgeeks.lxd. I had a dickens of a time trying to change the hostname and get dnsmasq to resolve properly. I think the hostname must be changed in three places (/etc/hosts, /etc/hostname, and /etc/network/interfaces) and networking restarted with 'service networking restart'. The dnsmasq service on the host may also need to be reloaded. I think it may be possible to change the hostname before starting the container so the correct name is added at the start.
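For the record, this is roughly what I ended up doing by hand (assuming a stock container where the old appliance name appears in all three files; adjust the names to suit):

$ lxc exec linuxgeeks bash
root@drupal8 ~# sed -i 's/drupal8/linuxgeeks/g' /etc/hostname /etc/hosts /etc/network/interfaces
root@drupal8 ~# service networking restart
# back on the host, the dnsmasq instance behind the LXD bridge may also need a restart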

Information is free, knowledge is acquired, but wisdom is earned.

John Carver's picture

After doing some more testing yesterday, I realized that the DNS hostname resolution problem was caused by my forgetting to include metadata.yaml and associated templates before building the LXD image. I found that I needed to add three templates, one for each of the files where the hostname needs to be changed. With the templates in place, the hostname is changed to the container name as expected and dnsmasq answers DNS queries properly. Copying or renaming the container should also change the hostname.
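For anyone following along, the fix amounts to one more entry in the "templates" section of metadata.yaml, plus a matching template file (interfaces.tpl is my own name for it, and its contents depend on how your container's interfaces file is laid out):

        "/etc/network/interfaces": {
            "template": "interfaces.tpl",
            "when": [
                "create",
                "copy"
            ]
        }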

Now to figure out a better way to handle inithooks.conf.

Information is free, knowledge is acquired, but wisdom is earned.

Jeremy Davis's picture

And there are plans to allow the confconsole plugins to be leveraged via the commandline (and therefore they could be preseeded via inithooks).

Hopefully that will be ready by the time we release v15.0 but I can't promise anything...

bmullan's picture

There have been times when I found an .ISO from which I'd like to create an LXD container, but in the past I had to settle for using it to create a KVM VM for the application.

I finally spent some time to create a script which automatically extracts the KVM VM's rootfs and copies it to a previously created LXD container's rootfs directory.

So far this seems to work well. I've taken various .ISOs from TurnkeyLinux (https://www.turnkeylinux.org/), created a KVM VM, then used the script to extract the VM's rootfs and copy it to an LXD container I'd previously prepared for it.

The script hopefully sets the UID/GID in the container correctly for use.

The script's process is fairly fast and so far has enabled me to turn up 4-5 LXD containers that originated from an .ISO file. They all seem to work okay from what I can tell.

Hopefully, I've documented the convert-vm.sh script enough so that others might improve on it and/or make use of it in use-cases like mine.
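The README covers the details, but in outline it's the kind of process sketched below (simplified and from memory, not the script verbatim; device names, mount points and the uid/gid range are examples):

# expose the KVM VM's disk image as a block device and mount its root partition
$ sudo modprobe nbd
$ sudo qemu-nbd --connect=/dev/nbd0 vm-disk.qcow2
$ sudo mount /dev/nbd0p1 /mnt/vmroot
# copy the VM's rootfs over the prepared LXD container's rootfs
$ sudo rsync -aHAX /mnt/vmroot/ /var/lib/lxd/containers/mycontainer/rootfs/
# shift ownership into the container's unprivileged uid/gid range
$ sudo fuidshift /var/lib/lxd/containers/mycontainer/rootfs b:0:100000:65536
# clean up
$ sudo umount /mnt/vmroot
$ sudo qemu-nbd --disconnect /dev/nbd0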

You can find everything on Github at:

https://github.com/bmullan/Convert-VM-to-LXD-Container-RootFS/blob/maste...

Jeremy Davis's picture

Good work on that Brian, and thanks for sharing your process. We're currently working on TurnKey v15.0. We still have a couple of OS issues that I plan to resolve before we release a Core RC, but once we've done that, perhaps we can look at this again and see if we can make our default LXC builds more easily compatible with LXD.

If you have any thoughts or suggestions in that regard, they'd be warmly welcomed.

John Carver's picture

I've opened a new forum topic, TurnKey Portable Development Environment (PDE), to continue the discussion of using LXC/LXD containers for TurnKey development work.  I've been able to get TKLdev to work in an LXD (v.2) privileged container. I've also scripted the conversion from a 'proxmox' appliance image to an 'lxd' image using an LXC (v.1) container as an intermediary. Unfortunately, because of the lack of support for LXD in Debian, I've had to rely on Ubuntu 16.04 or later.  Hopefully this will change soon so that LXD can be included in the 15.0 appliances.

Information is free, knowledge is acquired, but wisdom is earned.

Jeremy Davis's picture

Great work guys.

Thanks for your work on this John. Great stuff! :)

And thanks for the hint Brian. It looks like LXD can be installed on Debian Stretch (via a snap). So once we get v15.0 out the door, it looks like providing LXD images for v15.x may be a possibility?! That would be cool!

John Carver's picture

Thanks Jeremy. My understanding is that LXD is close to being added to Debian sid, but I'm not sure if it will ever be backported to stretch. I haven't tried using snap yet, but if you are amenable to using it as an alternative to Debian packages, then that would be the way to go. It wouldn't be setting a precedent because we already use alternative installation methods (pip for Ansible, and composer for Drupal).

My work on the TurnKey PDE, was done with an eye on folding the work into the 15.0 LXC appliance.  I put some of the changes into the lxc-turnkey template so they would be incorporated into both v.1 LXC and v.2 LXD containers.

There are two options for version 15.0. One is to update the LXC appliance and create a new LXD appliance; the other is to update LXC and add LXD to the same appliance.  There are several reasons why I favor the latter approach.

  1. An LXD appliance will still need to have LXC v.1 installed to allow converting appliance images from Proxmox format.  Eliminating LXC would require upstream support for LXD v.2 images, something I don't expect to happen for the v.15.0 release.
  2. The LXD v.2 user interface is far superior to the LXC v.1 interface. Current LXC users will probably want to use the new interface, but they will need tools to convert existing containers to the LXD v.2 format.

That raises the question, "If we combine LXC and LXD into one appliance, what should it be called?"

Information is free, knowledge is acquired, but wisdom is earned.

Jeremy Davis's picture

Here are my thoughts:

If LXD makes it into Sid and all is well with it (i.e. no showstopper bugs reported), then it will automatically migrate to Buster (current testing) after about a week. Once it makes it there, now that we have Chris Lamb (current Debian Project Leader) involved with TurnKey, getting it backported to Stretch should be pretty straightforward and easy (assuming that there aren't any tricky dependencies).

Having said that, if Brian's suggestion is correct (i.e. plans to deprecate the deb package and only provide a snap), then perhaps we'd be better off installing it via snap right from the start?

As for an appliance, I think that the long term goal would be to provide an LXD appliance, with LXD images ready to run (i.e. a somewhat similar LXD experience, to the current LXC appliance experience). My guess is that once we get to that point, there would be limited value in an LXC (only) appliance as well? Although I'm not sure and am open to suggestions and thoughts on that.

Either way, I think that it would be wise to work towards the long term goal incrementally, starting with a smaller win first. So perhaps we leave the LXC appliance as is for v15.0, and produce an LXD appliance separately, at least initially? Then we could look to move the LXD image building into buildtasks for the next v15.x release of the LXD appliance (and perhaps drop the LXC appliance then?).

How does that sound?

Jeremy Davis's picture

As I noted in my post above, if LXD is moving to drop deb packages and only provide snaps, then we're probably best to go that route right from the start...

So please do post that link when you find it. Cheers.

[update] I just found this blog post which speaks about removing some of the current deb package channels. I wonder if that's what you were referring to Brian? My reading of it is that it doesn't explicitly say that they'll be moving to snap-only packaging. Having said that, it's also quite an old post, so perhaps that's not the one you are referring to and there has been an update since?

bmullan's picture

in "Why a Snap"

this sentence...

Moving forward, we will be phasing out our Ubuntu PPAs and then our backports to older Ubuntu releases. Both of those use cases will be transitioned over to the snap.


Jeremy Davis's picture

On one hand it might be cool that it will install "anywhere", but one of the main concerns I've heard around snaps is that you no longer control your system anymore. Developers can push updates, whether you want them or not.

For a desktop, I can see how that might make sense, but for a production server, that's the last thing I'd want happening. Especially when the software is responsible for running a fleet of LXC/LXD containers!

A quick google did turn up a thread which covers a number of my concerns. It appears that late last year there was some concession made and you can now set update windows, rather than it just updating anytime it feels like. That's a good start, but unless they are backported updates (which I'm almost certain they're not), personally I want a lot more control than that. That's why I use Linux in the first place! :)

I much prefer Debian's model of providing a specific version and backporting security patches for the life of the OS. It does mean that you may miss out on new features, but everything continues to work as it did yesterday, every day! Now that their releases are picking up pace (pretty reliably every 2 years these days) software shouldn't get too out of date. Especially once we can get our updates happening a bit quicker (i.e. v15.0 should have been out 6 months ago)!

Having said that, I'm not totally against aiming for a v15.x LXD appliance that uses the snap. Personally I'd want to keep a vanilla LXC appliance in the library if that was the case.

bmullan's picture

The developers of a snap totally control if/when a snap is updated. Each snap is also self-contained in regards to app dependencies, which may or may not be the same as what is installed on the host OS.


The user can update or roll back at will also. The user can also have different versions of an app installed simultaneously (e.g. stable & beta) and run both without affecting each other's installation.

At least those are some of the benefits as I understand them.

bmullan's picture

Jeremy

Sorry for the double post & the spelling but my previous reply was via my cell phone.

I wanted to add that whether LXD is installed via a snap or a PPA does not affect the OS and the application(s) installed "inside" the container itself at all, i.e. the TurnKeyLinux applications.

It only affects the snap itself, in this case LXD.

I can copy a SNAP LXD container to a non-SNAP LXD server and that container will still run.

But perhaps I am not understanding your comment:

is that you no longer control your system anymore. Developers can push updates, whether you want them or not.

The TurnKeyLinux applications would NOT be SNAP packages themselves unless of course TurnKeyLinux creates them that way.  

So from my perspective nothing changes if LXD is snap or non-snap, except LXD itself would be available on more distros.

Brian


Jeremy Davis's picture

I deleted the double post and no worries on the spelling! :)

My response no doubt shows my level of paranoia and perhaps even a lack of understanding of snaps. But I'm pretty sure my following points are at least somewhat valid.

My concern is that if software is installed via a snap, then the developers can push software to my system (via the snap framework) whenever they like. I am no longer in control of my system if I install software via snap. As you point out, that only specifically affects the software installed via snap. But in the case of LXD, I would argue it's a pretty fundamental component of a container hosting system. By design, it requires significant access to all guest containers. The potential for something bad and out of my control happening is way too high for my liking.

In my mind, that especially becomes an issue with regard to TurnKey. TurnKey users trust us and IMO with good reason. We're not perfect, not by a long shot. But we tend to err on the side of caution and think things through pretty carefully. We seek to balance security, user friendliness, reliability, stability and user experience in general. Ultimately we seek to empower users to take control of their systems.

To my mind, snaps are philosophically headed in the wrong direction. Don't get me wrong, from a technical developer perspective, they seem pretty damn awesome. The idea of packaging software for consumption cross platform, in a single complete environment is pretty cool and on face value, incredibly attractive. And I can certainly see why many end users may find the thought of install and forget, pretty appealing.

I guess some of my concerns also apply to Debian security updates within a TurnKey appliance. But I know the level of security that Debian provides for its security repo (only a limited number of well vetted Debian Security Team members have push access, binaries are built from source and new packages need to be signed off by multiple team members). I also know that any updates that are pushed via auto security updates are minimalist patches that change as little as possible. We are (almost) guaranteed that the API, commandline options, etc will not change between security updates, unless there is a really good reason. Other than specific edge cases (e.g. Samba 4.1 in Jessie being updated to Samba 4.2) the version rarely ever changes without my explicit instructions.

I know that can be a double edged sword, as it also means I may be missing out on new features. But IMO on a server, the less that changes without my explicit request, the better!

As I understand it, that is not how it works with snaps. Software installed via snap can have updates pushed by the developer at any time. As I noted in my post above, that has been somewhat mitigated by new update options within snap. But it still doesn't allow the system administrator to block updates altogether, nor does LXD appear to provide an LTS/security-update only channel.
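For reference, the update window controls I mentioned look something like this (requires a recent snapd; from memory, so verify against the current snap docs):

# show when the next automatic refresh is due
$ snap refresh --time
# constrain automatic refreshes to a weekly window
$ sudo snap set system refresh.timer=sat,03:00-05:00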

Without knowing more about the snap security measures, I really don't like the thought of a team I essentially know nothing about having push access to my server. Especially for a piece of infrastructure as vital as container hosting software (i.e. LXD). Not that I don't trust the LXD team to do the right thing, I do. It would be suicide for them to knowingly push something malicious or even really broken.

But I don't know what security mechanisms they may have in place to ensure that no malicious (or even poorly tested) software gets pushed to my system. I don't know how closely they guard their credentials and/or keys that allow them to push updates to me. I don't know what holes there may be in their deployment process which could be intercepted or interfered with by someone with malicious intent (e.g. MITM attack). I don't know how well they vet new developers that have push access to snap packaged software.

Whilst I'm sure that they test every software release, how can we be sure that the software contained within the snap will continue to play nice with the rest of my system? What if a newer version of an LXD snap relies on some new snapd functionality which isn't yet available in the version of snapd that I have installed? At worst LXD might break, at best it will not update - which I consider an issue when I'm under the impression that I can "set and forget" (it auto updates right?!).

What happens if a buggy version of LXD gets pushed to my system which breaks things? FWIW, that's one of the main reasons why we moved from Ubuntu to Debian base for TurnKey. Ubuntu pushed a number of buggy security updates which broke people's systems (and I've heard that they've unintentionally done that a few more times since). Even since we switched, Debian (with a much more robust testing regime IMO) have pushed a couple of buggy sec updates (although nothing to the same degree as the issues we had with Ubuntu - broken cron anyone?!). So I'm not talking about some theoretical possibility, I'm talking about something that happens, even when you have a whole Security team testing, reviewing and vetting each other's code.

Even if we put the risk of something bad creeping in aside, we still have the issue of major version bumps. What if an API call your system relies on gets removed in a new version of LXD? What if the behaviour of a specific command or commandline switch changes in a new version? What if some LXD functionality that one of my old containers relies on, changes or is removed completely?

They may seem unlikely scenarios right now, but they are not without precedent and IMO it's only a matter of time before something like that happens. IMO, the less control I have over my system, the more chance there is of something going wrong when I least expect it. At least if I break my own system, I know what lead to it and I'm right there at the terminal already.

And last but certainly not least, the reason why I love Linux so much is that I have the power over, and control of my system. Having updates forced on me which (potentially and sometimes really) break my system is one of the main reasons why I was so happy to have moved away from Windows! I don't want that infecting my Linux!

Bit of an idealist rant I know, but I'd like to think that my perspective is not complete tinfoil hat stuff.

Having said all that, to elaborate on the closing remark of my last post: I'm not fundamentally opposed to installing specific software via a snap in a TurnKey appliance. However, I would not want it to be the only option for TurnKey users to run full Linux containers (i.e. LXC/LXD). And I would also want to spell out my concerns, so users could make an informed decision.

John Carver's picture

I thought I would report here my experience in trying to use snapd to install LXD in a TurnKey LXC appliance.

  • First, I created a new container from the v.14.2 tkldev image and named it tkldev-15.
  • Second, I configured it to use my github credentials and local apt-cacher-ng, then applied Jeremy's tips for developing 15.0 on v.14.2 tkldev. See https://gist.github.com/JedMeister/ad6f3b405a889b62985dc933431f7dd1
  • Using git, I cloned the lxc project from https://github.com/Dude4Linux/lxc/tree/updates-for-15.0-release
  • Ran make, and the project built to completion with a product.iso. Checked root.patched and verified it was 15.0-beta.
  • Using fab-chroot build/root.patched, I tried to install snapd and LXD. apt-get install snapd ran to completion and installed several dependent packages.
  • snap install lxd, however, failed as snapd was not running.
  • Investigation showed that:
    • snapd won't run in a chroot, because systemd is not available
    • snapd won't run in a TurnKey appliance because systemd has been disabled and replaced by sysvinit.
  • My conclusion is that snapd can't be used in v.15.0 unless the build system is rewritten to use systemd, and tkldev rewritten to use overlayfs instead of aufs and systemd-nspawn instead of chroot. Seems like an awful lot of work.


Information is free, knowledge is acquired, but wisdom is earned.

Jeremy Davis's picture

We may have to work around some issues within the root.patched chroot, due to systemd not always playing nice inside a chroot, but otherwise it shouldn't be an issue (so long as there are SysVInit scripts). As you can see on the bug report, I did initially investigate using systemd-nspawn (sort of like LXC itself really - chroot on steroids) but we ended up just developing a wrapper script for the service command (which will get cleaned up by the build process).

The usage of SysVInit (instead of SystemD) in the v14.x containers was to workaround a bug in the version of SystemD in Debian Jessie. The issue has been fixed in newer versions of SystemD (and packaged in Stretch). So we intend to revert to SystemD in the LXC container builds for v15.x anyway.

So my guess is that it will probably work ok once we get there. Although personally, I'm still hoping for a backported LXD package to appear in the Debian repos. Even if LXD themselves no longer create packages other than snaps, there's nothing stopping distros from creating their own packages! :)

John Carver's picture

FWIW I was able to compile and install LXD using fab-chroot and following the instructions at https://github.com/lxc/lxd. I had to add gcc to the list of prerequisites. I was able to start the LXD daemon and run 'lxc list', so there don't appear to be any major roadblocks. However there was no 'make install' target, so all the SysVInit scripts and SystemD support would have to be created or copied from the Ubuntu debs. Compiling from source leaves behind a lot of cruft so I wouldn't recommend doing this as part of the build process.

I'm facing a similar situation with the OpenVAS appliance. The v6 version available in Stretch is old and out of date. The version I want to use, v9, is available in Buster, but there's no sign of a backport yet.

For now, I will focus on converting root.patched into an LXD image so I can test without needing a VM.

Information is free, knowledge is acquired, but wisdom is earned.

Jeremy Davis's picture

Good to know that a source install works. Although I agree, we should leave that as a last resort. On some levels I think a source install is preferable to a snap install, at least on my own personal systems, but for TurnKey users it may be a bit too much to expect them to be able to do future updates from source. Ideally I'd most like to see a Debian package, but failing that a snap install is still an option in my mind (despite my rant the other day!)

Re OpenVAS, backports can be requested via the debian-backports mailing list. Worst case scenario, we could get it backported ourselves (i.e. fund Chris Lamb to do it).

bmullan's picture

John et al.,

Here are a couple of the latest comments by Stephane Graber on the LXD user forum related to LXD, Debian, and snap vs PPAs.

Anyway, wanted to make sure you saw Stephane's comments:

https://discuss.linuxcontainers.org/t/lxd-natively-on-debian/1318


John Carver's picture

Thanks Brian

Stephane says,

I believe the current packaging effort is therefore centered around LXD 2.0.x (LTS branch), meaning that you’d be able to get LXD 2.0.10 (or whatever the latest stable release is) natively through Debian but would then need to use the snap for something more recent.

I am currently using version 2.21-0ubuntu in the Portable Development Environment.  I had to pull this version from xenial-backports to get some needed features.  I'm afraid using the Debian LTS version won't meet our needs for an LXD appliance. Bummer!!

Information is free, knowledge is acquired, but wisdom is earned.

John Carver's picture

@Jeremy, did you see my comments on your gist about creating bootstraps for Debian stretch?  It's been a few days and I didn't want you to miss them if it will help you prepare for the v15.0 release.

See https://gist.github.com/JedMeister/ad6f3b405a889b62985dc933431f7dd1

Information is free, knowledge is acquired, but wisdom is earned.

Jeremy Davis's picture

Thanks for the bump. I'm not sure why, but I don't seem to have got notifications about your posts on that gist?! Even though I would have expected to get notifications anyway, plus you explicitly mentioned me?!

Anyway, good work on your posts. FWIW there is actually a bootstrap in that same bucket, although I did forget to explicitly note that:

http://dev-apt.jeremydavis.org/bootstraps/bootstrap-stretch-amd64.tar.gz
http://dev-apt.jeremydavis.org/bootstraps/bootstrap-stretch-amd64.tar.gz.hash

And I have no issue with people using that bucket, just so long as it's not being abused (because I pay for excessive traffic). Hopefully it won't be long and we'll have those packages merged into the default TurnKey repo. And we'll have a proper automated dev-testing repo soon too! (fingers crossed...). Then instead of dev-test.jeremydavis.org it'll be a normal archive.turnkeylinux.org repo (probably just called 'stretch-testing').

I'll probably make up a refreshed bootstrap build sometime soon. That will be up on the official mirror before we officially release any v15.0 appliances; with the possible exception of the Core RC.

FWIW, if you wanted to play around with TKLDev v15.0 you can. It's a bit convoluted and will require building your own packages of fab etc, but it's possible.

bmullan's picture

LXD and LXC 3.0.0 were released last week with several major new features, including but not limited to these two:

lxd clustering - lets multiple hosts of LXD containers operate as one giant LXD host server

lxd clustering FOSDEM 2018 video:  https://www.youtube.com/watch?v=DVqMeo3lvv0&t=293s

lxd-p2c - a tool to copy a physical system (real or VM) into an LXD container. I would assume this may enable someone to use TurnKey to create a physical install or VM and then use the lxd-p2c tool to convert/copy that physical install or VM into either a "local" or "remote" host machine running LXD 3.0.

Once they verify that the new LXD container is running, they could then decommission the physical machine or VM.

lxd-p2c FOSDEM 2018 video:  https://youtu.be/JKztAWZOj9g

IMHO the orchestration & management of local/remote nodes/LXD containers with the new lxd clustering seems much easier than Kubernetes and may scale equally well, but watch the video and see what you think.

The reason I mentioned both of the new LXD 3.0 features, lxd-p2c and lxd clustering, is that a workflow may be:

  1. create a turnkey vm or machine
  2. convert/copy the vm or machine to an LXD server somewhere using lxd-p2c app
  3. add the LXD server to an LXD cluster of remote/local Host machines (could be cloud based?)
  4. manage & scale the new turnkey lxd container using the normal LXD cli commands as you would if they were just a single local machine

Anyway I thought I'd let you know this all may be much easier now.


John Carver's picture

Thanks Brian. As usual you are ahead of me when it comes to LXC/LXD news. And to think I just spent the last several months developing lxd-make-image, which does something similar to lxd-p2c. If the latter will convert a product.iso then it's exactly what we're looking for. I don't consider my time wasted, however, because I learned a lot in the process.

My work has stalled for a bit because I got hit with a bug in the last Ubuntu kernel update. My laptop gives four NMI hardlock errors (cpu0, cpu1, cpu2, cpu3), then stops dead. I'm pretty sure it was the kernel update, which may have been corrupted. Until I get this figured out I'm SOL.

Information is free, knowledge is acquired, but wisdom is earned.

bmullan's picture

I don't know if this will help you folks with LXD on Debian (I use Ubuntu), but I found this tool on GitHub which creates a .deb file to install the latest LXD on either Debian 9 or Debian 10.

https://github.com/AlbanVidal/make-deb-lxd

Brian


Jeremy Davis's picture

That looks great. I'm not sure when I'll have a play, but that's certainly a handy tool for when I do!

bmullan's picture

Just some FYI...

Google announced today at Google I/O 2018 that ChromeOS will now run regular Linux applications using Crostini and containers.

Crostini's containers are LXD.

I got confirmation from Stephane Graber (LXD/LXC project lead) and from several others on the Crostini subreddit.

This opens up both LXD and Linux apps to potentially hundreds of thousands of Chromebooks & tablets.

Hopefully, TurnKey can eventually work with LXD so your apps can also work in this huge new market.

Here are some links on Reddit & the announcement by Stephane Graber:

https://www.reddit.com/r/Crostini/new/


https://www.reddit.com/r/linux/comments/8i00bh/chrome_os_is_based_on_linux_but_you_cant_easily/?st=jgzjrclq&sh=6cd867dc

and

https://insights.ubuntu.com/2018/04/24/lxd-weekly-status-44

Jeremy Davis's picture

Yes, that certainly adds to the impetus to ensure that TurnKey runs smoothly as an LXD container! Thanks for the heads up on that!

bmullan's picture

Just an FYI:

LXD 3.3 has been released

With the LXD v3.3 release, the release notes indicate the LXC-to-LXD conversion tool has been updated. Just FYI in case this helps.

New features

Rewrote and improved lxc-to-lxd

Our LXC to LXD migration tool has been rewritten in Go to match the rest of our codebase.

It now uses the LXD migration API to transfer the containers (similar to lxd-p2c) and has support for both LXC 2.x and 3.x.
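Usage appears to be along these lines (flag names as I understand them from the announcement; I haven't tested this myself):

# see what would be migrated, then convert a single LXC container
$ lxc-to-lxd --containers mycontainer --dry-run
$ lxc-to-lxd --containers mycontainer
# or migrate everything under the default lxcpath
$ lxc-to-lxd --all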

https://discuss.linuxcontainers.org/t/lxd-3-3-has-been-released/2327

bmullan's picture

I had the time today to convert one of the TurnkeyLinux  apps to an LXD container using the lxd-p2c tool.

On my home PC I use the snap version of LXD. The LXD snap is now available for most distros, enabling them to run/use LXD.

When doing the one-time "sudo lxd init", I answered the question asking whether I want LXD available over the network with Yes.

Then there are a couple more related questions, ending with what password should be used to authenticate remote access to the local LXD daemon.

Once that was done, I downloaded the Sahana .ISO to my PC and created a KVM VM from it (a VirtualBox or VMware Workstation VM would work equally well).

Then on my local PC I used Go to compile the latest lxd-p2c from source, using the instructions in this linuxcontainers forum post:

https://discuss.linuxcontainers.org/t/howto-use-lxd-p2c/3574

I then used SCP to copy the lxd-p2c executable to the Sahana VM.

Finally, logged into the Sahana VM, I executed the following command (note that the post at the above URL had the command options wrong):

# ./lxd-p2c https://<IP_of_PC_w_LXD>:8443 / sahana

The above command uses lxd-p2c to copy the rootfs of the VM to the IP_of_PC_w_LXD host and create a new LXD container named "sahana". It:

  • took about 3 minutes to copy the root filesystem of the VM to my home PC's LXD daemon,
  • created a new LXD container called sahana,
  • started the sahana container.

Back on my PC (i.e. not in the VM) I could check that the container was running:

$ lxc list sahana  

+--------+---------+-----------------------+------+------------+-----------+
|  NAME  |  STATE  |         IPV4          | IPV6 |    TYPE    | SNAPSHOTS |
+--------+---------+-----------------------+------+------------+-----------+
| sahana | RUNNING | 10.105.172.253 (eth0) |      | PERSISTENT | 0         |
+--------+---------+-----------------------+------+------------+-----------+

Pointing my PC's web browser to:

http://10.105.172.253

and the Sahana web page pops up... so it works!

Brian


bmullan's picture

I'm able to now:

  • snapshot the sahana LXD container
  • export the sahana LXD container to other LXD systems
  • clone/copy the sahana container to make more
  • create an LXD cluster of sahana using the LXD cluster feature/command
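e.g. (assuming a reasonably current LXD; on versions without 'lxc export' you'd publish an image from a snapshot and export that, as shown here):

$ lxc snapshot sahana snap0                       # point-in-time snapshot
$ lxc publish sahana/snap0 --alias sahana-image   # turn the snapshot into an image
$ lxc image export sahana-image .                 # tarball you can import on another LXD host
$ lxc copy sahana sahana-02                       # clone the container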


Jeremy Davis's picture

That sounds pretty cool. Thanks for sharing your experience! I'm sure others would find that of interest.

Although I note that it's really only going to be relevant for individuals wishing to migrate an existing server to LXD, or for personal usage. I say that because each server would essentially be a "clone" of the original (rather than an individual server). So it would have the exact same "secrets", such as any usernames, passwords and emails, as well as using the same salt for password hashing etc. So long as the user is aware of those limitations it would be ok, but not really suitable for distribution.

If you wished to ensure that each LXD server is unique, then prior to generating the "clone" I would recommend resetting the inithooks and following many/most of the tweaks that we do for the current LXC builds (some may likely be redundant though). That will ensure that each LXD guest you create is a truly unique server. Then each individual server could be preseeded, or initialised via first login (as our current LXC build does).

I started outlining the process that you might need to take, but I got quite bogged down in details... So I backed out of that. If you are interested in trying to do that for your ISO -> LXD servers, then I suggest that you have a look at our bt-container script (all of buildtasks is written in bash so hopefully it should be relatively straightforward to follow). That's what we use to transform the ISO into the current LXC build. As I noted above, some of that may be redundant. At least a few things we do are likely covered by their conversion process. Although there are certainly some things that we do that wouldn't be covered, for example, they would be unaware of our inithooks/firstboot processes.

If you want to dig into that a little deeper, please feel free to ask if you have any questions.

Jeremy Davis's picture

That might sort of work, although I strongly suspect that the end result would be sub-optimal. It would still require some additional effort on behalf of the end user to get everything up and running. Plus there would be security implications until the user completes the inithooks. That's possibly not a huge deal for a user who understands what is going on, what is required, and the implications of leaving it unconfigured; but it's really not ideal for general distribution.

By default all "headless" appliances (i.e. ones that do not have a proper terminal - e.g. LXC guests) generate random passwords on firstboot and are then "fenced" until the user (re)sets all the passwords (the fence blocks all access except via SSH until the inithooks are completed). The ISO builds do not include all the components required to provide the fence. The random generation and fence is quite important because otherwise, the server will contain some secrets, etc left over from build time. For example, for about ~70% of the library, anybody with access to Adminer, could use the "default" password to modify the DB! Under normal circumstance, because of the way we implement things, there is no way to leverage any "default" passwords, but would not be the case in an LXD server built from an ISO.

Whilst under LXC/LXD the user can set the root password at launch time, none of the other passwords would be set. Without the tweaks we provide, the user would need to manually launch the firstboot scripts. I also suspect that the default inithooks would still be running, but in the background, with the user having no way to connect to them. So the user would likely need to manually kill them first. Otherwise the inithooks would continue to run on every boot (still in the background) and I'm not really sure what further implications that might have. I'm not sure, but I suspect that it may cause the passwords set by the user to not survive a reboot.

Like I said though, the tweaks we provide are all scripted via buildtasks. So you could create a script to run within the ISO guest prior to conversion to LXD. Then you could re-implement everything that we do, so it all works as intended. It might take a little effort to get it all working nicely, but I think that the end result would be worth it. Then you would have an LXD appliance which should "just work" the same as our existing "headless" appliances.

Jeremy Davis's picture

On a system with a "proper" terminal (e.g. a VM and/or an ISO install), the inithooks (i.e. firstboot scripts - where passwords etc are set) block the starting of the other services. In an LXC/LXD container, you don't have a "proper" terminal (because it's completely "headless"). So whilst the inithooks will still start, they can't attach to the terminal, so will run in the background. That will not be apparent to a user, unless they go looking for it.

We work around the limitations of containers with some funky tricks contained within buildtasks. I've already mentioned the "fence", but another of the important components is that we use dtach to link the running inithooks session to the login session (dtach is sort of like a super slim alternative to screen). Incidentally, our default LXC containers use an alternate SystemD service, especially written for Linux containers, not the default service which runs on ISOs. So that way, the user is greeted with the inithooks on firstboot - same as when running on other platforms such as a VM.

There's no reason why you couldn't reapply those same tweaks that we use, but you'll need to read through the bt-container script to see what needs to be done. As our scripts are (also) converting the ISO to a(n LXC) container build, some of the work we do can almost certainly be skipped. The conversion script you are using will do much of the same stuff that we do.

IMO, if TurnKey were to provide LXD builds, the best path would be to leverage the existing LXC buildtask and add in any additional steps which may be required. Sort of like the way that we extended our original VMDK build to also produce the OVA build. From my understanding, John has already done a lot (perhaps all?) of this work, although IIRC his scripts leverage the build process prior to the ISO being produced (i.e from the rootfs). To ensure consistency between all the builds, all of our alternate builds are generated, starting with the ISO (which is unpacked and adjusted as need be).

If you're interested in pursuing that further (i.e. beyond your own personal needs) then I suspect that it would be possible to leverage John's work, in combination with the existing buildtasks scripts. It's a little sucky that they have written their tools in Go (don't get me wrong, IMO Go is a good choice, it's just that I don't know it...). So looking at the script probably won't be much help to me. Although, it might be interesting to compare the contents of the generated LXD container against our default LXC build and/or John's LXD build to see what's different. I don't have the time or energy right now, but if someone would like to upload an LXD build of a TurnKey appliance somewhere, I could probably have a look at some point.

Marcos's picture

Guys, I'm sure many things have evolved in this matter over the last 6 to 8 years. We are now on Core 16 and TKL remains solid and working well.

How do you manage to deploy cloud Proxmox LXC containers in production and develop them locally in LXD on your laptops? Maybe @jed and John could give me some tips.

Jeremy Davis's picture

John hasn't been about for a while, so I'm not sure if we'll hear from him. And unfortunately I am completely unfamiliar with LXD (never used it). When I'm away from my Proxmox server, I either spin up an instance (on AWS) via the Hub or run a local instance in VirtualBox.

Sorry I don't have anything better for you. Having said that, it looks like LXD is close to making it into Debian. Once it makes its way into 'testing', then we can request a backport. Once we have a native install to work with, then I'd like to revive our LXC appliance - except as an LXD appliance. I'm not sure how practical that will be, nor what sort of timeframe it might take to get there.

John Carver's picture

Hi Guys, John is still lurking around, but my interests have moved on to other projects such as Raspberry Pi development. One of the last TurnKey related projects I was working on was running LXD on my Ubuntu laptop in what I dubbed the TurnKey Portable Development Environment (PDE). Although it was developed on Ubuntu 16.04, I was able to run it on 18.04 and 20.04 before my laptop died. You can find my latest efforts at https://github.com/Dude4Linux/turnkey-pde. I can't remember if the conversion scripts were working for TurnKey 15.x but I think that's where I stopped. The examples show version 14.2.

Marcos, I hope you'll be able to find something helpful there to get you started.

John

Information is free, knowledge is acquired, but wisdom is earned.

Marcos's picture

John, such an honor to have an answer from you, and I'm glad the old sea dog John Carver is still sailing.

So basically what I have done is almost the same as you propose in your git repo, BUT today LXD is a snap package that can be installed through snap install without big headaches on Debian.


I'm still having problems converting the image back to the Proxmox environment. Every time I export an LXC container (that has been developed locally), generate a tar.gz or zstd image, and try to run it in Proxmox, it runs, but it looks like all the changes I made don't take effect, even though the images are snapshotted before the export. It's a very, very weird behaviour that I currently can't find any logic in at all.

About Raspberry Pis, I myself have been trying to help the OpenCPN and OpenMarine communities (embedded hardware for ships and sailing vessels) and have a little bit of experience in that.

In the health and clinical engineering environment we normally use smaller embedded devices that are centralized through a middleware Raspberry Pi. Hasura and Proxmox work extremely well and are stable.

It's almost a shame for me to say it, but the budgeting in Brazil is pretty bad for healthcare, so we have to do what we can; Raspberry Pis and Atmels have been both a curse and a blessing for us in these latitudes.

A big hug for you all, guys. Thanks for the encouragement and for sharing all this. So glad and happy to be part of a community like TKL.

Jeremy Davis's picture

Awesome thanks John!

Glad to hear that you're still floating about! :)

If you're into playing with RPi now, have you seen the (preliminary/not-yet-official) TurnKey Raspberry Pi builds? They still have some rough edges and I'm super keen to get back to them and get them "official", but they may be of some interest?!

Take care and look after yourself. Thanks again for dropping by and sharing your wisdom and experience.
