Announcing TurnKey OpenStack optimized builds
As we mentioned before, making TurnKey easy to deploy on as many public and private clouds as possible is an important goal for us. Unfortunately, there are too many players in the cloud software space for us to support every single one, so it's much easier to put our effort behind the winning horses.
TurnKey has supported the leading public cloud platform, Amazon EC2, from early on, with the Hub simplifying management and deployment.
OpenStack is particularly interesting, because it is most likely the future of open source clouds.
I originally got intrigued when I heard about NASA planning to open source Nebula in 2009, which became the basis for Nova, the compute component of OpenStack. Since then, I've been following OpenStack development from afar and have been itching to develop support for TurnKey on the platform.
The time has finally arrived, and I'm pleased to announce TurnKey optimized builds are hot out of our build farm, and available for immediate download and deployment.
You can get them from the "Download -> More Builds" link on the appliance pages.
TurnKey OpenStack optimized builds
- EBS auto-mounting support: we've updated our custom EBSmount mechanism for OpenStack, which automatically mounts EBS devices when attached.
- Support for automating instance setup: via the user-data scripts mechanism.
- Automatic APT configuration on boot: saves bandwidth costs by using the closest package archive for maximum performance.
- SSH key support: instances that are launched with a key-pair will be configured accordingly.
- SSH host key fingerprints displayed in system log: verification of server to prevent man-in-the-middle (mitm) attacks.
- Randomly generated root password: is set on first boot, and displayed in the system log **.
- Randomly generated mysql/postgres passwords: the MySQL root and/or PostgreSQL postgres passwords are set to the same random password as root **.
- Instance metadata python library and CLI: used internally, but also useful for advanced users.
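For a quick look at what an instance sees, the EC2-compatible metadata service that the features above rely on can also be queried directly with curl from inside a running instance. The paths below are standard EC2 metadata paths, shown here as an illustrative sketch:

```shell
# Query the link-local metadata service from inside an instance.
curl -s http://169.254.169.254/latest/meta-data/instance-id
curl -s http://169.254.169.254/latest/meta-data/public-keys/
# User-data (e.g. a boot script) is served under a sibling path:
curl -s http://169.254.169.254/latest/user-data
```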
** Because OpenStack builds are used in headless deployments (without a console), they include an inithook which preseeds default values, and random passwords:
/usr/lib/inithooks/firstboot.d/29preseed

MASTERPASS=$(mcookie | cut --bytes 1-8)
cat > $INITHOOKS_CONF <<EOF
export ROOT_PASS=$MASTERPASS
export DB_PASS=$MASTERPASS
export APP_PASS=turnkey
export APP_EMAIL=admin@example.com
export APP_DOMAIN=DEFAULT
export HUB_APIKEY=SKIP
export SEC_UPDATES=FORCE
EOF
Depending on your use case, you can use user-data (note the security implications) to preseed during boot, or preseed after the system has booted by executing turnkey-init.
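As an illustrative sketch (not an official script), a user-data payload could write the same preseed file before the first-boot hooks run. The variable names mirror the 29preseed hook above; the /etc/inithooks.conf path is an assumption:

```shell
#!/bin/sh
# Illustrative user-data script: preseed the first-boot questions so
# the instance configures itself unattended. Variable names follow the
# 29preseed hook; the /etc/inithooks.conf path is an assumption.
MASTERPASS=$(mcookie | cut --bytes 1-8)   # 8-char random password
cat > /etc/inithooks.conf <<EOF
export ROOT_PASS=$MASTERPASS
export DB_PASS=$MASTERPASS
export APP_PASS=turnkey
export APP_EMAIL=admin@example.com
export APP_DOMAIN=DEFAULT
export HUB_APIKEY=SKIP
export SEC_UPDATES=FORCE
EOF
```

Remember that anything passed as user-data may be readable via the metadata service, hence the security caveat above.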
Example: importing TurnKey Core into OpenStack
There are several ways of uploading an image into an OpenStack deployment; below is one way to get you started.
[update] Please note: the 'glance add' command has apparently been deprecated. Please see Dmitry's post below.
# cd /tmp
# tar -zxf turnkey-core-11.3-lucid-x86-openstack.tar.gz
# ls turnkey-core-11.3-lucid-x86
turnkey-core-11.3-lucid-x86-initrd
turnkey-core-11.3-lucid-x86-kernel
turnkey-core-11.3-lucid-x86.img
# IMG=turnkey-core-11.3-lucid-x86
# glance add -A $GLANCE_TOKEN \
is_public=true \
container_format=aki \
disk_format=aki \
name="$IMG-kernel" \
< /tmp/$IMG/$IMG-kernel
Added new image with ID: 5
# KERNEL_ID=5
# glance add -A $GLANCE_TOKEN \
is_public=true \
container_format=ami \
disk_format=ami \
kernel_id=$KERNEL_ID \
name="$IMG" \
< /tmp/$IMG/$IMG.img
Added new image with ID: 6
# glance -A $GLANCE_TOKEN index
ID Name Disk Format Container Format Size
-- ---------------------------------- ----------- ---------------- ---------
6 turnkey-core-11.3-lucid-x86 ami ami 688498688
5 turnkey-core-11.3-lucid-x86-kernel aki aki 4179712
# euca-describe-images
IMAGE ami-00000006 turnkey-core-11.3-lucid-x86 available public machine aki-00000005
IMAGE aki-00000005 turnkey-core-11.3-lucid-x86-kernel available public kernel
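Once both images are registered, the instance can be launched with the same EC2-compatible tooling; a sketch, assuming a key pair named "mykey" and the m1.small instance type:

```shell
# Launch the machine image registered above; the key pair name and
# instance type are placeholders for your own environment.
euca-run-instances ami-00000006 --kernel aki-00000005 -k mykey -t m1.small

# Check that the instance reaches the "running" state:
euca-describe-instances
```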
Comments
Great guys!
I'll be downloading/testing some of the appliances by the weekend and come back with some feedback. It's very nice to have the TKL arsenal at my disposal in openstack.
Nice one Alon
I have heard a bit about this tech. And although I'm very happy with my PVE server (especially now it has the TKL library in OVZ at my fingertips!) I might have to set up a testing OpenStack server perhaps to have a look at what all the fuss is about!
Looking forward to hearing about your experience Adrian.
Keep up the great work guys! :)
Sweet!
Going to spin up an instance at Rackspace and give it a try :-)
You guys rock!
Did it work?
Hi flexbean, did you ever get this to work with Rackspace? Wanna share your experience?
TurnKey Core 12.0rc available for OpenStack
Following the announcement of TurnKey Core 12.0rc (ISO, Amazon EC2, OpenVZ), we've released an OpenStack optimized build (download).
Please note that TKL 12.0 (based on Debian Squeeze) requires the initrd for deployment in OpenStack.
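A sketch of registering the initrd alongside the kernel and machine image, following the glance add syntax used above (note the update about its deprecation); the 12.0 image name and the resulting IDs are illustrative:

```shell
# Register the initrd as a ramdisk (ari) image, then reference both
# kernel and ramdisk when registering the machine image.
IMG=turnkey-core-12.0rc-squeeze-x86   # illustrative name
glance add -A $GLANCE_TOKEN \
    is_public=true \
    container_format=ari \
    disk_format=ari \
    name="$IMG-initrd" \
    < /tmp/$IMG/$IMG-initrd
# Suppose the kernel was registered as ID 5 and the ramdisk as ID 7:
glance add -A $GLANCE_TOKEN \
    is_public=true \
    container_format=ami \
    disk_format=ami \
    kernel_id=5 \
    ramdisk_id=7 \
    name="$IMG" \
    < /tmp/$IMG/$IMG.img
```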
If you come across any issues, or have ideas on improving the optimized build, please post a comment.
Thanks Alon!
I was missing the initrd part. I'll try that and see how it goes...
Same thing happens to me since the Icehouse release
We can boot the image, but the logs are not shown on screen as before. It seems the logs are no longer sent to stdout since the Icehouse release.
Would be great to find a workaround.
Regards
Thanks so much!
No route to 169.254.169.254 when using a network without router
Hi folks,
we have been using TurnKey Linux appliances for a while and they work like a breeze in OpenStack when using a private network with an internal gateway to connect to the internet. OpenStack Neutron routes the connection to the metadata IP through this gateway.
But now we have a public network with an external router, and the only way to reach the metadata IP is to add a static route to 169.254.169.254. Most OpenStack images get this information from DHCP and can connect to the metadata IP, but it seems that the TurnKey base images do not read this information when the DHCP server delivers it. Hence no route, and no access to user-data and metadata.
I would like to know if anybody has run into the same problem and how they fixed it. I'm considering modifying the AMIs and adding the static route manually for our provider (https://cirrusflex.com aka StackOps Service Provider), but maybe it's overkill.
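For what it's worth, a manual workaround along those lines might look like the sketch below (the gateway address is a placeholder for your network's next hop):

```shell
# One-off: add a host route to the metadata service.
# 192.0.2.1 is a placeholder for your network's next hop.
ip route add 169.254.169.254/32 via 192.0.2.1 dev eth0

# Persistent (Debian-style /etc/network/interfaces stanza):
#   post-up ip route add 169.254.169.254/32 via 192.0.2.1 dev eth0
```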
Regards
Hi, sorry we missed this
What sort of VM backing does your system use?