TurnKey Linux Virtual Appliance Library

Comparing Debian vs Alpine for container & Docker apps

Background: For TurnKey 15 (codenamed TKLX) we're evaluating a change of architecture from the current generation of monolithic systems to systems as collections of container-based micro-services. Essentially, the service container replaces the package as the highest-level system abstraction.

There are several layers to the new architecture, but the first step is to figure out the best way to create the service containers. Alon has been quietly working on this for the last couple of months and managed to slim down Debian to 12MB compressed for the base image:

https://github.com/tklx/base

https://hub.docker.com/r/tklx/base/

With Anton's help we added PoC tklx containers for MongoDB, Nginx, Postgres, Apache, Django and others:

https://github.com/tklx

https://hub.docker.com/u/tklx/

So far the most thought provoking question we've received is: why are we using Debian for this instead of Alpine Linux, the trendy minimalist upstart blessed by the powers at Docker?

That is a very good question, and it deserves a good answer.

Alpine Linux has its roots in LEAF, an embedded router project, which was in turn forked from the Linux Router on a Floppy project.

As far as I can tell, Alpine would have stayed on the lonely far fringe of Linux distributions if not for Docker. I suspect a big part of Docker's motivation for adopting Alpine was the petabytes of bandwidth it could save if people using Docker didn't default to a fat Ubuntu base image.

Debian is superior to Alpine Linux with regard to:

  • quantity and quality of supported software
  • the size and maturity of its development community
  • amount of testing everything gets
  • quality and quantity of documentation
  • present and future security of its build infrastructure
  • the size and maturity of its user community, number of people who know its ins and outs
  • compatibility with existing software (libc vs musl)

Alpine Linux's advantages on the other hand:

  • it has a smaller filesystem footprint than stock Debian.
  • it's slightly more memory efficient, thanks to BusyBox and the musl C library

Alpine touts security as an advantage, but aside from defaulting to a grsecurity kernel (which isn't an advantage for containers) it doesn't really offer anything special. If anything, the small size and relative immaturity of the Alpine dev community make it more likely that its infrastructure and build systems could be compromised. Debian is also at risk, but there are more eyes on the prize, and Debian is working to mitigate this with reproducible/deterministic builds, which aren't on Alpine's roadmap and may be beyond its resources.
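The reproducible-builds point deserves unpacking: if independent rebuilds of the same source yield bit-identical artifacts, anyone can audit the official binaries by comparing hashes, so a compromised build server has nowhere to hide. A minimal sketch of that comparison step in Python (the artifact paths are hypothetical):

```python
import hashlib

def sha256_of(path):
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def is_reproducible(artifact_paths):
    """True if every independently built artifact hashes identically."""
    digests = {sha256_of(p) for p in artifact_paths}
    return len(digests) == 1
```

If two builders disagree, the differing digest points straight at a non-deterministic or tampered build environment.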

Though Alpine advertises a range of benefits, the thing its dev community seems to obsess about most is size. As small as possible.

Regarding the footprint, Alon showed you can slim down Debian, so Alpine's footprint advantage is small. If that isn't enough, we can take it one step further and use Debian Embedded to slim things down even more, using BusyBox and smaller libc variants, just like Alpine.

Choosing Alpine over Debian for this use case trades away people-oriented advantages that increase in value over time (skilled dev labour, bug hunters, mindshare, network effects) for machine-oriented advantages (storage and memory) that devalue rapidly thanks to Moore's Law.

I can see Alpine's advantages actually mattering in the embedded space circa 2000, but these days Debian runs fine on the $5 Raspberry Pi Zero, while the use case Alpine is actually being promoted for is servers with, by comparison, huge amounts of disk space and memory.

Maybe I'm missing something but doesn't that seem awfully short sighted?

OTOH, I can see how from Docker's POV, assuming bandwidth isn't getting cheaper as fast as storage or memory, and they're subsidizing petabytes of it, swinging from the fattest image to the slimmest image could help cut costs. I bet Docker also likes that it can have much more influence over Alpine after hiring its founder than it could ever hope to have over a big, established distribution like Debian.

Summary of Debian pros:

  • vastly larger dev & user community   
    • more packages   
    • more testing   
    • more derived distributions   
    • more likely to still be in robust health in 10 years
  • working towards reproducible builds
  • better documentation
  • libc more compatible than musl, less likely to trigger bugs
  • more trustworthy infrastructure

Summary of Alpine pros:

  • lighter: community obsessed with footprint
  • musl: more efficient libc alternative
  • simpler init system: OpenRC instead of systemd
  • lead dev & founder is a Docker employee
  • trendy

Comments

Hans Harder's picture

I have both Turnkey and Alpine lxc images running at the moment.

I use Alpine when I need a single service running in a container which is available on Alpine, but sometimes you run into problems with Alpine because of the musl library it uses. Sometimes building from source does not work, while on Debian it is no problem.

It's great to see that you trimmed down the Debian container to 12 MB, I am gonna try that out. If it is similar in size, my preference goes to Debian... although I like the simplicity of Alpine.

Alpine's rc init system I find much better for an LXC setup... simple and straightforward.

QUOTE:  ech`echo xiun|tr nu oc|sed 'sx\([sx]\)\([xoi]\)xo un\2\1 is xg'`ol

Hans Harder's picture

any idea how to get this base working in a vanilla LXC environment?

I am missing /sbin/init, usually a softlink to systemd, but I cannot find a systemd or systemctl.

 


Liraz Siri's picture

That's a good question Hans

I'll have to ask Alon about that and maybe we can update the documentation...
Hans Harder's picture

no problem...

It's a difficult decision, Debian or Alpine. The Debian base is about 55 MB uncompressed, Alpine about 6 MB. So if you want containers for single services... almost nothing can beat Alpine. For glibc dependencies I added the glibc apks from https://github.com/sgerrand/alpine-pkg-glibc and these seem to work OK with some Linux binaries I needed to run.


Hans Harder's picture

Stick to Debian.  I noticed that I get into problems when I need Perl modules which are not in Alpine's default distribution. Then I have to use alternative ways of getting those modules, with all kinds of problems around dependencies, different interfaces and update synchronisation.

I find that almost all the modules I use exist in Debian; that is probably also the case for PHP or Python or ...

The 40 MB of extra storage in that case is not a problem.

Also, with Docker you can have a base image of TKL Debian and then, on top, the images of the services... so with multiple services the extra space is really no reason to switch. Personally I don't like Docker; I'd rather have containers running on stock LXC which I can tailor to my needs... so keep on supplying those base images please...
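The layer-sharing point is easy to quantify: a shared base layer is stored once, so the Debian-vs-Alpine size difference is paid a single time no matter how many service images sit on top. A quick illustration using the rough sizes quoted in this thread (the per-service deltas are made up):

```python
def total_storage(base_mb, service_deltas_mb):
    """Disk used by N service images that share one base layer:
    the base is stored once, each service adds only its own delta."""
    return base_mb + sum(service_deltas_mb)

# Rough sizes from this thread: ~55 MB uncompressed Debian base vs ~6 MB
# Alpine; the per-service deltas below are purely hypothetical.
deltas = [30, 45, 20, 25, 35]           # five single-service images
debian_total = total_storage(55, deltas)  # 210 MB
alpine_total = total_storage(6, deltas)   # 161 MB
print(debian_total - alpine_total)        # base difference paid once: 49
```

With five services the whole fleet costs only the one-time 49 MB base difference, and the gap per service shrinks as more services share the layer.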

 


Liraz Siri's picture

If you don't like Docker, RKT is also an option

If you don't like Docker, the containers are also compatible with RKT, which some consider technically superior in certain respects, though it certainly isn't as well marketed as Docker.

Anyhow, we do plan on sticking with Debian and if necessary extending it to fit the use cases we're trying to solve rather than seeking a new starting point. Debian has the largest community of any Linux distribution and the network effects are powerful enough that I don't see how a temporary technical advantage by another distribution could possibly offset that.

Containers that run on bare metal, not VMs...

Hello! Did you think about building TKLX over SmartOS / Triton?
https://github.com/joyent/triton

Containers that run on bare metal, not VMs... promising for security & performance, see https://www.joyent.com/triton

I have a dream that one day every appliance would just deploy, run and backup on such a container-native infrastructure :)

Liraz Siri's picture

OpenSolaris is dead, long live Illumos !

At the moment, we couldn't seriously consider building TurnKey on top of OpenSolaris based technology, at least not exclusively. Unfortunately the future viability of the communities supporting that technology is uncertain. IMHO, it's much more likely that Linux will close any remaining technological gaps than OpenSolaris and its descendants will catch up in terms of the adoption and mindshare critical for their continued long term existence.

Had Sun moved to open source Solaris a decade earlier, I'd probably be writing this on an OpenSolaris base system. Unfortunately, once they realized Linux was disrupting their business it was too late to catch up. The train had left.

Once Oracle bought up Sun the future stewardship of Sun's open source efforts became even more uncertain. Oracle, true to their culture, didn't take too long to squash the fragile open source communities around Sun's "too late, too little" open source efforts.

Guest's picture

Triton and sun's technology

Obviously you know nothing about Triton or SmartOS; you do, however, know about illumos.

"Once Oracle bought up Sun the future stewardship of Sun's open source efforts became even more uncertain. Oracle, true to their culture, didn't take too long to squash the fragile open source communities around Sun's "too late, too little" open source efforts. "

While it is true that Oracle squashed OpenSolaris, they did not squash the open source community, as evidenced by illumos and its child SmartOS. SmartOS is a hypervisor that is built on the illumos kernel and leverages the illumos-gate and pkgsrc repositories. There are several very much alive projects that spun out of the forked OpenSolaris codebase, with many improvements made by Joyent.

Also, Triton is a Node.js and C based cloud system that uses SmartOS as its hypervisor, which provides three methods of getting things done.

1. Zones = similar to jails or even LXC or simply put containers

2. Docker = ported to opensolaris technology by joyent another container

3. KVM = ported to opensolaris technology by joyent for non unix/linux support

I don't think the other user was actually asking whether you would build TKLX on an OpenSolaris or illumos base, but rather about developing TKLX to leverage the triple-play hypervisor capabilities of SmartOS and the Triton cloud itself. Which, in my opinion, blows away most other cloud systems, as evidenced by the 1.8 billion dollar purchase of Joyent's Triton and SmartOS by Samsung last month.

1.8 billion; I would hardly call that dead, squashed by Oracle, or too little, too late on Sun's part. This move by Samsung proves that this technology is alive and viable. You have the TurnKey Hub, which uses Amazon; however, Triton is a complete open source cloud system, with its own Solaris-based hypervisor, which you could use to break away from Amazon and stand up your own TurnKey cloud that supports out of the box bare metal lx brand zones/containers, bare metal Docker containers, and QEMU-KVM VPS.

I could be wrong, but I believe the other user was actually asking whether you have considered using Triton, given that it is open source, can be deployed more easily than any other cloud, and is used in the enterprise, and whether its triple-play capabilities could define a new model for TKLX: you would create a base Debian image, as you already have, and make it run on SmartOS zones, SmartOS Docker and SmartOS KVM. You already have Docker covered, which is compatible with Triton; RKT is covered, as are ISO and EC2 images; so all you would need is a zone or lx brand image and a KVM image, and TurnKey could be deployed to any of SmartOS's capabilities and be orchestrated by the Triton cloud.

The point being that it's a cloud based on OpenSolaris technology, but you would not stop what you're doing now and throw away Debian in favour of Solaris or illumos; it is the same concept as any other cloud and will run your Debian-based product on Triton and SmartOS. So really you would only, for example, convert your VMDK to KVM images and create a zone image; your Docker work will run out of the box on the SmartOS hypervisor.

Just saying... Oh, and by the way, I'm not bashing; I love what TurnKey does, and congrats on creating a really slim Debian base. I love it.

 

TKLX on SmartOS / Triton

Patrick has made my point. Thank you!

TKLX on SmartOS / Triton, that's just a dream... I didn't know it was worth a billion dollars :)

 

Guest's picture

Sorry, actually 1.2 billion, not 1.8. My bad, lol.

Liraz Siri's picture

You're right, I'm not familiar with SmartOS / Triton

To be honest I hadn't even heard of those projects before they were mentioned in this comment thread. I took a quick look, discovered the technology was derived from OpenSolaris / Illumos, and reported back on my somewhat hasty conclusion. Thanks for taking the effort to explain why you thought I was mistaken.

The very strong endorsement from two community members indicates it may be wise to take a deeper look at whether this is something we could leverage for TKLX. I don't think I've encountered such enthusiastic support for any other hypervisor.

We'll still need to consider other factors besides the technology, such as the vitality of the open source community relative to other options, but that's mainly an issue if we're cornered into picking a winning horse. As a counter example of when that's not the case, the base images for TKLX support both Docker and RKT.

Guest's picture

TKLX and triton would be nice and can be done easily.

TKLX micro-services architected using the container autopilot pattern, with ContainerPilot for orchestration and Consul for service discovery, would make TKLX a nice platform that indeed would still run anywhere, not just on Triton.

That's gold open source isn't it?

Definitely worth having a look at :)

Triton DataCenter is an open-source cloud management platform

https://github.com/joyent/triton

ContainerPilot is an application-centric micro-orchestrator

https://github.com/joyent/containerpilot

SmartOS / Container hypervisor

https://www.joyent.com/smartos

Liraz Siri's picture

We will!

The enthusiasm is a signal we should take a closer look at this stuff. Will do!
Guest's picture

Here is why I am enthused

Turnkey is a wonderful project and provides a solid platform for getting things done. Triton and SmartOS provide TKLX direction and architecture guidelines.

1. SmartOS as a hypervisor provides more deployment options than any other hypervisor on the market. Sure, Xen, ESXi and the others work, but they focus on one technology, mainly virtual machines. Proxmox and some of the others, including Xen and VMware, are integrating Docker or OpenVZ containers, but no other hypervisor provides bare metal access to containers, Docker and VMs in one stack like SmartOS does.

2. Triton as a management platform and datacenter is based on Node.js and C, and it does not get any better or easier than this. Triton is well thought out and very opinionated, and it stands up gracefully without all the headaches of, say, deploying an OpenStack system. Triton is more than just a cloud with SmartOS as its hypervisor; it is a complete datacenter.

3. Using the container autopilot pattern (http://autopilotpattern.io/) provides a clear architecture for developing TKLX micro-services and even standard VMs, in conjunction with ContainerPilot (https://github.com/joyent/containerpilot), which automates the discovery and configuration needed to connect application components in separate containers, and Consul (https://www.consul.io/), which makes it simple for services to register themselves and to discover other services via a DNS or HTTP interface. Using a scheduler like Docker Compose, Mesosphere Marathon or even Google Kubernetes, TKLX becomes more than just a bunch of container apps; the apps become self-aware and capable of scaling in the datacenter with fault tolerance and high availability.

4. Architecting for Triton does not lock TKLX in, as the micro-services will still run anywhere; containers, Docker and VMs can be run on any cloud in any datacenter.

5. Because Triton is so easy to deploy, the TurnKey project could leverage it to deploy its own datacenters, breaking away from Amazon EC2. This cuts out any middleman and allows all monies collected to go to the project. Using wholesale server ISPs like OVH.com, which rents cloud-grade servers for super cheap in multiple datacenters worldwide, it would actually be affordable to stand up a geographically distributed Triton datacenter fast.

6. Building TKLX and further orchestrating TurnKey to leverage this technology, coupled with a graphical drag-and-drop interface to Triton, would allow enterprise customers to visually build complete self-aware networks with all the bells, for example building TKLX micro-services for:

   A. An Input Firewall

   B. HA Proxy Load Balancer

   C. NAS Storage

   D. MySQL

   E. Ruby on Rails

   F. An output rules filter

In this example all of the above are TKLX containers architected using the technologies I mentioned. They are all self-aware, created to work with a drag-and-drop interface, and represent classes. These classes can be visually dragged and dropped on a grid that represents what the user wants. So say we drag an input firewall, wire it to an HA proxy that wires to three Ruby on Rails classes sharing a NAS service, and wire that to a MySQL cluster that connects to an output filter for internet access; then we press run, and one of the schedulers stands up a complete firewalled, load-balanced Ruby on Rails cluster with database clustering and NAS storage.

That's what Triton and a little bit of love and coding can bring to TKLX. I have deployed a 3-node, 12-core, 24-thread, 48 GB RAM Triton cluster in 1 hour, flawless, as my test lab.
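To make the Consul side of point 3 concrete: in the autopilot pattern, each service registers itself by sending a JSON payload to the local Consul agent's `/v1/agent/service/register` HTTP endpoint. A minimal Python sketch that just builds such a payload (the service name, port and health-check URL are invented; actually submitting it requires a running Consul agent):

```python
import json

def consul_registration(name, port, health_url, interval="10s"):
    """Build a Consul /v1/agent/service/register payload
    with an HTTP health check attached."""
    return {
        "Name": name,
        "ID": "%s-%d" % (name, port),  # unique per instance
        "Port": port,
        "Check": {"HTTP": health_url, "Interval": interval},
    }

# Hypothetical TKLX nginx micro-service announcing itself:
payload = consul_registration("tklx-nginx", 8080,
                              "http://localhost:8080/health")
print(json.dumps(payload, indent=2))
```

ContainerPilot automates exactly this kind of registration (plus re-configuration when peers come and go), so the application itself doesn't have to carry the boilerplate.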

 

Guest's picture

only supports the v1 Docker Registry API...

Unfortunately, imgadm's support for pulling Docker images still covers only the v1 Docker Registry API. That is quite old at this point, and many images on Docker Hub (i.e. docker.io) no longer support v1. Basic support for the v2 registry API is there in the 'node-docker-registry-client' that imgadm is using, but it would still take some coding work on imgadm to get it doing v2 pulls. See #644 for that work.

SmartOS as a Virtualisation Platform

Hello! An interesting blog post from /proc/mind

To me SmartOS looks like the perfect virtualisation platform, one of the most advanced platform hypervisor OSes these days, if not the most advanced.

 31/01/2016

SmartOS as a Virtualisation Platform

Virtualisation platforms and technologies represent a big focal point of the technology scene these days.

Recently I’ve watched a DockerCon 2015 presentation by Bryan Cantrill, CTO of Joyent, an OS kernel developer for 20 years and "father of DTrace" as he calls himself [1], about how to debug Docker containers gone bad in production. [2]

I recommend that anyone working with, or thinking of working with, Linux containers, and Docker especially, watch this presentation!

I have to say that this is one of the best presentations I’ve seen when it comes to showing the full picture of the docker tooling, lifecycle and ecosystem.

In his presentation, Bryan brings the operational point of view of running applications, in docker, in a production environment and how to deal with and debug failure when applications inside docker containers go wrong.

It is very rare to see a good presentation on the failure modes of Docker, since most presentations and talks focus on why Docker is amazing and how it will solve all your problems.

Irrespective of using docker or not, applications have bugs, they go wrong, it is very important to have adequate tooling and discipline to debug and improve them.

Towards the end of his talk Bryan shows some amazing tools and services that the Joyent team has built since 2004 when they’ve started their journey as a “cloud platform” company.

All these tools are built upon their platform hypervisor called SmartOS. [3]

The presentation plus the details I’ve read about SmartOS intrigued me, and I gave SmartOS a spin in a KVM virtual machine to see what it can do.

What is SmartOS

Disclaimer: I’m no authority on SmartOS, I’m relaying to you what I’ve found out about it until now.

Go and search for yourself to find out more.


Historically speaking SmartOS derives from the Solaris OS. [4]

Sun open-sourced Solaris, as OpenSolaris, in 2005.

After the Oracle acquisition of Sun Microsystems in 2010, a group of Solaris engineers created the illumos kernel [5], which was used subsequently to power OpenIndiana, from which SmartOS sprang.

The Solaris kernel developers started working on OS virtualisation back in 2005; it looks like they are 10 years or so ahead of Linux containers, and it shows. [6]


SmartOS is not a general purpose OS; it appears to be designed from the ground up to run virtual workloads.

It is effectively an (almost fully) read-only platform hypervisor running in RAM and managing different kinds of virtual workloads.

SmartOS can run these virtual workloads at the same time using the same tooling:

  • fully emulated virtual hardware VMs, achieved by using the KVM hypervisor
  • 3 types of OS virtualisation, sharing one OS kernel between multiple partitioned zones ( called containers in Linux land ):
    • it can run SmartOS zones, called joyent brand zones
    • it can run Linux zones, called lx brand zones. This allows a user to run a full Linux userland on the SmartOS UNIX kernel
    • docker containers from the docker hub, still called lx brand zones and running on the same SmartOS UNIX kernel

Because SmartOS is built on the powerful legacy of Solaris zones, it has a very useful feature that Linux containers lack: complete zone isolation!

From a security point of view SmartOS zones (read: containers) are fully isolated; an attacker who has gained root privileges in a zone cannot gain root access on the host hypervisor. [11]

I’ve heard that this is why the Joyent cloud runs containers on bare-metal, while other cloud providers like AWS or Google run containers in VMs.

Ramdisk is where SmartOS feels at home

General purpose OSes have to be installed on disk to function.

SmartOS, on the other hand, boots off an ISO or USB stick, or is PXE booted, and runs entirely in RAM. It has no install-to-disk option.

Here are some arguments about why booting from RAM is a feature in SmartOS. [7]

The SmartOS hypervisor/OS, or what is called the global zone, [10] is mostly Read-Only.

I’ve recently seen this kind of approach in the Linux world from the people behind CoreOS. Surely they can draw more inspiration from the SmartOS/OpenSolaris developers.

How can anyone test it ?

I’ve tested it by using the SmartOS ISO and booting it in a KVM VM.

I could have achieved the same thing by booting off the SmartOS USB image.

If you have some type of virtualisation on your laptop/desktop (KVM, VirtualBox, VMware ...) then you can give it a spin in a VM. [8]

What can a user run on SmartOS ?

A user can run KVM VMs and SmartOS OS virtualisation zones.

Since I’m running SmartOS in KVM, even though I have enabled KVM passthrough on my desktop, I haven’t tried to run KVM VMs, because the SmartOS boot sequence says KVM is not supported in my VM; therefore I’ve only been able to run zones.

SmartOS hypervisor

After booting from the ISO image or the USB image, you’ll answer a few basic questions to set up networking and the ZFS pool in the global zone.

SmartOS global zone

[root@smartos ~]# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet 127.0.0.1 netmask ff000000
rtls0: flags=1004943<UP,BROADCAST,RUNNING,PROMISC,MULTICAST,DHCP,IPv4> mtu 1500 index 2
        inet 10.110.110.131 netmask ffffff00 broadcast 10.110.110.255
        ether 52:54:0:33:ea:3a
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
        inet6 ::1/128
[root@smartos ~]#  
[root@smartos ~]# uname -a
SunOS smartos 5.11 joyent_20160121T174331Z i86pc i386 i86pc
[root@smartos ~]# zonename
global

Once that is done you’re all set up to start running virtual workloads.

Another very useful feature of SmartOS is that SmartOS treats all 4 types of virtualisation described above as the same thing:

  • a disk image of some type
  • a bit of json metadata
  • a virtualisation wrapper (KVM, zones) that starts using that disk image and the json metadata
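That uniform model means a workload definition is nothing more than a small JSON document. As an illustration, here is a helper that assembles a zone spec of the same shape as the hand-written ones used in this post (the field set mirrors those examples; the defaults, UUID and addresses below are placeholders):

```python
import json

def zone_spec(brand, image_uuid, alias, ip, gateway,
              mem_mb=512, quota_gb=10):
    """Assemble a vmadm-style zone description.
    Field names follow the JSON examples in this post; the default
    memory/quota/resolver values are illustrative assumptions."""
    return {
        "brand": brand,                  # "joyent", "lx", or "kvm"
        "image_uuid": image_uuid,        # from `imgadm avail` / `imgadm import`
        "alias": alias,
        "hostname": alias,
        "max_physical_memory": mem_mb,
        "quota": quota_gb,
        "resolvers": ["8.8.8.8"],
        "nics": [{"nic_tag": "admin", "ip": ip,
                  "netmask": "255.255.255.0", "gateway": gateway}],
    }

spec = zone_spec("joyent", "00000000-0000-0000-0000-000000000000",
                 "smartosz01", "10.110.110.142", "10.110.110.1")
print(json.dumps(spec, indent=1))
```

The resulting JSON is what you would feed to `vmadm create -f`, regardless of which of the virtualisation types you picked.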

All 4 types of virtualisation are created, lifecycle managed, and destroyed using the exact same tools:

  • disk image manager imgadm
  • virtual machine manager vmadm

That is it!

No more running docker ... or rkt ... for a container workload, then qemu-system-x86_64 or interfacing with libvirt for a KVM VM, each coming with its own tool for creating, lifecycle-managing and destroying virtual workloads.

Disclaimer: all the zones I’ll show you how to start use the “admin” networking, which basically means they’ll all be in bridged network mode, and you’ll be able to access them on your internal network as if they were separate physical machines.

SmartOS zone

Let’s run another instance of SmartOS, as an isolated zone, let’s say for SmartOS package building!

  • find some SmartOS disk images provided by Joyent:

find SmartOS datasets

imgadm avail |grep base | tail -n10
3c0e76fe-0563-11e5-a0d7-9fe1e24b554c  base-multiarch          15.1.1      smartos  zone-dataset  2015-05-28
2bd52afe-3474-11e5-b07d-c7fb14b2c9e8  base-32                 15.2.0      smartos  zone-dataset  2015-07-27
5c7d0d24-3475-11e5-8e67-27953a8b237e  base-64                 15.2.0      smartos  zone-dataset  2015-07-27
9caff6c6-3476-11e5-9951-bf98c6cb8636  base-multiarch          15.2.0      smartos  zone-dataset  2015-07-27
7bcfc9c8-6e9a-11e5-8d57-73e262d7338e  base-32                 15.3.0      smartos  zone-dataset  2015-10-09
842e6fa6-6e9b-11e5-8402-1b490459e334  base-64                 15.3.0      smartos  zone-dataset  2015-10-09
9250f5a8-6e9c-11e5-9cdb-67fab8707bfd  base-multiarch          15.3.0      smartos  zone-dataset  2015-10-09
543ef738-beb5-11e5-bf3d-675487324488  base-32-lts             15.4.0      smartos  zone-dataset  2016-01-19
96bcddda-beb7-11e5-af20-a3fb54c8ae29  base-64-lts             15.4.0      smartos  zone-dataset  2016-01-19
f58ce4f2-beb9-11e5-bb02-e30246d71d58  base-multiarch-lts      15.4.0      smartos  zone-dataset  2016-01-19
  • download the zfs volume into your local pool
imgadm import 96bcddda-beb7-11e5-af20-a3fb54c8ae29
...
  • create a json description of the zone you’d like to start
{
 "brand": "joyent",
 "image_uuid": "96bcddda-beb7-11e5-af20-a3fb54c8ae29",
 "alias": "smartosz01",
 "hostname": "smartosz01",
 "max_physical_memory": 512,
 "quota": 10,
 "resolvers": ["8.8.8.8", "208.67.220.220"],
 "nics": [
  {
    "nic_tag": "admin",
    "ip": "10.110.110.142",
    "netmask": "255.255.255.0",
    "gateway": "10.110.110.1"
  }
 ]
}
  • start the SmartOS zone from the disk image downloaded and the json description
vmadm create -f smartos-zone.json
Successfully created VM 16021e9e-7e2f-4294-f7db-86dea02198be
  • we can see the zone running by interrogating the global zone stat tool (equivalent to top)
prstat -Z
...
  2933 root       23M   15M sleep    1    0   0:00:00 0.0% fmd/28
  7582 root     6992K 1128K sleep   51    0   0:00:00 0.0% sshd/1
   168 root     4516K 2752K sleep   29    0   0:00:00 0.0% devfsadm/8
ZONEID    NPROC  SWAP   RSS MEMORY      TIME  CPU ZONE
     6       15   55M   33M   0.8%   0:00:02 1.4% 16021e9e-7e2f-4294-f7db-86d*
     0       55  322M  194M   4.7%   0:00:20 0.7% global
     2       10  251M   31M   0.7%   0:00:00 0.0% eb5b5aad-54c6-6915-c1c8-9cc*
  • either log in to the zone from the SmartOS console via zlogin, or simply ssh using the IP in the json description
zlogin 16021e9e-7e2f-4294-f7db-86dea02198be
...
[Connected to zone '16021e9e-7e2f-4294-f7db-86dea02198be' pts/4]
   __        .                   .
 _|  |_      | .-. .  . .-. :--. |-
|_    _|     ;|   ||  |(.-' |  | |
  |__|   `--'  `-' `;-| `-' '  ' `-'
                   /  ; Instance (base-64-lts 15.4.0)
                   `-'  https://docs.joyent.com/images/smartos/base

[root@smartosz01 ~]# zonename
16021e9e-7e2f-4294-f7db-86dea02198be
[root@smartosz01 ~]# uname -a
SunOS smartosz01 5.11 joyent_20160121T174331Z i86pc i386 i86pc Solaris
[root@smartosz01 ~]#
[root@smartosz01 ~]# psrinfo -v
Status of virtual processor 0 as of: 01/31/2016 20:15:15
  on-line since 01/31/2016 20:12:58.
  The i386 processor operates at 3600 MHz,
        and has an i387 compatible floating point processor.
Status of virtual processor 1 as of: 01/31/2016 20:15:15
  on-line since 01/31/2016 20:12:59.
  The i386 processor operates at 3600 MHz,
        and has an i387 compatible floating point processor.
  • install packages and start building

LX branded zone - full Linux userland

This type of virtualisation resembles OpenVZ or LXC virtualisation in Linux: a full operating system userland running in a “container”.

This time we’ll boot a full debian8 userland on the SmartOS kernel using the lx branded zone.

We’ll follow the same steps and use the same tools to boot into the debian8 zone as we did for the SmartOS zone.

  • find some debian disk images provided by Joyent:
imgadm avail |grep debian|tail -n10
a781a350-07f4-11e5-9372-5f2886027fbc  lx-debian-7             20150601    linux    lx-dataset    2015-06-01
1187b54a-15ca-11e5-a80c-275e2f64f91e  debian-7                20150618    linux    lx-dataset    2015-06-18
82d952c4-1b7b-11e5-a299-bb55cb08eab1  debian-7                20150625    linux    lx-dataset    2015-06-25
a00cef0e-1e73-11e5-b628-0f24cabf6a85  debian-7                20150629    linux    lx-dataset    2015-06-29
d8d81aee-20cf-11e5-8503-2bc101a1d577  debian-7                20150702    linux    zvol          2015-07-02
2f56d126-20d0-11e5-9e5b-5f3ef6688aba  debian-8                20150702    linux    zvol          2015-07-02
380539c4-3198-11e5-82c8-bf9eeee6a395  debian-7                20150724    linux    lx-dataset    2015-07-24
7c815c22-4606-11e5-8bb5-9f853c19be54  debian-7                20150819    linux    lx-dataset    2015-08-19
5fb104e4-6af5-11e5-a952-ff6eb14ca518  debian-7                20151005    linux    lx-dataset    2015-10-05
1adf7176-8679-11e5-9ff7-3beedf8060b9  debian-8                20151109    linux    lx-dataset    2015-11-09
  • download the lx-dataset zfs volume into your local pool, the zvol volume is for KVM VMs
imgadm import 1adf7176-8679-11e5-9ff7-3beedf8060b9
....
  • get the kernel_version from the zfs volume metadata
imgadm show 1adf7176-8679-11e5-9ff7-3beedf8060b9 |grep kern
    "kernel_version": "3.16.0"
  • create a json description of the zone you’d like to start
{
 "brand": "lx",
 "image_uuid": "1adf7176-8679-11e5-9ff7-3beedf8060b9",
 "alias": "debianz01",
 "hostname": "debianz01",
 "kernel_version": "3.16.0",
 "max_physical_memory": 512,
 "quota": 10,
 "resolvers": ["8.8.8.8", "208.67.220.220"],
 "nics": [
  {
    "nic_tag": "admin",
    "ip": "10.110.110.145",
    "netmask": "255.255.255.0",
    "gateway": "10.110.110.1"
  }
 ]
}
  • create the Debian zone from the downloaded image and the JSON description
vmadm create < /root/zones-specs/debian-lx-zone.json
Successfully created VM 28bef743-dc95-c0c9-ed90-9c0bcf31bef8
  • either log in to the zone from the SmartOS console via zlogin, or simply ssh to the IP from the JSON description (note the “virtual linux” in the output of uname)
zlogin 28bef743-dc95-c0c9-ed90-9c0bcf31bef8
[Connected to zone '28bef743-dc95-c0c9-ed90-9c0bcf31bef8' pts/11]
Linux 28bef743-dc95-c0c9-ed90-9c0bcf31bef8 3.16.0 BrandZ virtual linux x86_64
   __        .                   .
 _|  |_      | .-. .  . .-. :--. |-
|_    _|     ;|   ||  |(.-' |  | |
  |__|   `--'  `-' `;-| `-' '  ' `-'
                   /  ;  Instance (Debian 8.1 (jessie) 20151109)
                   `-'   https://docs.joyent.com/images/container-native-linux
...
apt --version
apt 1.0.9.8.1 for amd64 compiled on Jun 10 2015 09:42:07
Usage: apt [options] command
...
 uname -a
Linux eb5b5aad-54c6-6915-c1c8-9cca817b4b4b 3.16.0 BrandZ virtual linux x86_64 GNU/Linux
  • use the Debian 8 zone as you see fit
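The per-zone JSON boilerplate above doesn’t have to be hand-edited for every guest. Here’s a hedged sketch (the function name and defaults are my own, not part of vmadm) that generates the spec from a few parameters:

```shell
# Sketch: emit a minimal lx-zone JSON spec for vmadm on stdout.
# image_uuid and kernel_version must match what `imgadm show <uuid>`
# reports for the dataset you imported.
gen_lx_zone() {
    zalias=$1
    zip=$2
    img=${3:-1adf7176-8679-11e5-9ff7-3beedf8060b9}  # debian-8 lx-dataset
    kver=${4:-3.16.0}                               # from `imgadm show`
    cat <<EOF
{
 "brand": "lx",
 "image_uuid": "$img",
 "alias": "$zalias",
 "hostname": "$zalias",
 "kernel_version": "$kver",
 "max_physical_memory": 512,
 "quota": 10,
 "resolvers": ["8.8.8.8", "208.67.220.220"],
 "nics": [
  {
    "nic_tag": "admin",
    "ip": "$zip",
    "netmask": "255.255.255.0",
    "gateway": "10.110.110.1"
  }
 ]
}
EOF
}
```

On a SmartOS global zone you could then pipe the output straight into vmadm, e.g. `gen_lx_zone debianz02 10.110.110.146 | vmadm create`.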

LX branded zone - docker container

This is still an LX branded zone (a Linux userland on the SmartOS kernel), but this time it will boot and run a Docker container image from Docker Hub. [9]

The interesting part is that Docker containers on SmartOS appear on the network bridge like any other VM if you launch them on the “admin” network.

Let’s launch a Docker container in a SmartOS zone:

  • add the Docker Hub source to imgadm
imgadm sources --add-docker-hub
  • import the image (this download comes from Docker Hub, not from Joyent)
imgadm import busybox
Importing 0be24e0e-04e4-6110-9ea4-dd6264d65cb0 (docker.io/busybox:latest) from "https://docker.io"
...
  • create the zone specification
{
"alias": "busybox",
"image_uuid": "0be24e0e-04e4-6110-9ea4-dd6264d65cb0",
"nics": [
    {
        "interface": "net0",
        "nic_tag": "admin",
        "gateway": "10.110.110.1",
        "netmask": "255.255.255.0",
        "primary": true,
        "ip": "10.110.110.146"
    }
],
"brand": "lx",
"kernel_version": "3.13.0",
"docker": true,
"cpu_shares": 1000,
"zfs_io_priority": 1000,
"max_lwps": 2000,
"max_physical_memory": 256,
"max_locked_memory": 256,
"max_swap": 1024,
"cpu_cap": 1000,
"tmpfs": 1024,
"maintain_resolvers": true,
"resolvers": [
    "10.10.1.7",
    "8.8.8.8"
],
"internal_metadata": {
    "docker:cmd": "[\"/bin/sleep\", \"300\"]"
},
"quota": 7
}
  • create the Docker container zone from the downloaded image and the JSON description
vmadm create -f docker-busybox-lx-zone.json
Successfully created VM e931e355-4b09-e248-b8fe-c538c279dfe3
  • list the running zones
[root@smartos ~]# vmadm list
UUID                                  TYPE  RAM      STATE             ALIAS
e931e355-4b09-e248-b8fe-c538c279dfe3  LX    256      stopped           busybox
16021e9e-7e2f-4294-f7db-86dea02198be  OS    512      running           smartosz01
eb5b5aad-54c6-6915-c1c8-9cca817b4b4b  LX    512      running           debianz01
  • depending on what you’re running in the Docker container, either log in to the zone from the SmartOS console via zlogin (same effect as docker exec), ssh to the IP from the JSON description, or simply access the application running in the container

Conclusions

SmartOS comes equipped by default with:

  • ZFS as the default filesystem, arguably the most advanced filesystem available today
  • the illumos kernel and zones for OS virtualisation, which give better resource utilisation and have security features built in
  • DTrace, one of the most advanced dynamic tracing tools to date
  • KVM, to virtualise operating systems other than SmartOS on top of SmartOS

From the SmartOS wiki: [12]

An important aspect of SmartOS is that both OS (Zones) and KVM virtual machines are both built on Zones technology.
In the case of OS virtualization, the guest virtual machine is provided with a complete userland environment on which to run applications directly.
In the case of KVM virtualization, the KVM qemu process will run within a stripped down Zone.
This offers a variety of advantages for administration, including a common method for managing resource controls, network interfaces, and administration.
It also provides KVM guests with an additional layer of security and isolation not offered by other KVM platforms.
Finally, VM's are described in JSON.  Both administrative tools, imgadm and vmadm, accept and return all data in JSON format.
This provides a simple, consistent, and programmatic interface for creating and managing VM's.

I’m impressed by the consistency of its virtualisation tooling and by the OS feature set as a virtualisation platform!

To me, SmartOS looks like the perfect virtualisation platform: one of the most advanced hypervisor OSes available today, if not the most advanced.

Resources and inspiration

http://www.procmind.com/blog/2016/01/31/smartos-as-a-virtualisation-plat...
