Beta of TurnKey Core on Ubuntu 10.04 LTS
Well, it took a little longer than expected, but we are pleased to announce that TurnKey Core - the common base for all appliances - has been released, based on Ubuntu 10.04 LTS (Lucid Lynx).
Ubuntu 10.04 LTS will be supported for five years.
This is a beta release, so take it for a spin and let us know what you think. If you come across any issues, please report them. If you have ideas on how to make it better, let us know.
All other (beta) appliances based on Ubuntu 10.04 LTS will be released in batches in the following weeks leading up to the official release, which is planned for the beginning of August. This is to coincide with the release of Ubuntu 10.04.1, which is recommended for production deployment.
Changes
Bootsplash
The bootsplash menu has been updated. Install to hard disk is now the first option, selected by default. Live system has been renamed to Try without installing. A warning message will be displayed when running in live non-persistent mode.
Recommended packages _not_ installed by default (APT)
This is not really a change from TurnKey Core 8.04; it's actually the same configuration. The change is notable because Ubuntu (since 8.10) installs recommended packages by default. We chose to keep the old configuration as TurnKey appliances are minimal, and only include what needs to be included. We believe this is the right decision; if you think differently, we'd love to hear your thoughts.
Byobu - Screen for human beings
While attending the Ubuntu Developer Summit (UDS) for Maverick, I was introduced to byobu by its developer - Dustin Kirkland. I found byobu much more user friendly than screen, as well as more informative with its notification plugins (e.g., memory and processor usage, package upgrades, clock). We decided not only to include it in Core, but also to launch it by default. Again, we'd love to hear your thoughts on this decision.
To get you started, here are some of the keyboard shortcuts (see the manual for more info: man byobu):
- F2 - Create a new window
- F3 - Move to previous window
- F4 - Move to next window
- F6 - Detach from this session
- F8 - Re-title a window
- F9 - Configuration Menu
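If you'd rather byobu didn't launch automatically at login, it can be toggled per user - assuming the byobu-enable / byobu-disable helpers shipped with this version of byobu:

    byobu-disable    # stop launching byobu automatically at login
    byobu-enable     # turn it back on later

(The F9 configuration menu offers a similar toggle from within byobu.)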
Improved terminal
The bash configuration has been customized to include colored output (ls, grep, etc.) as well as a 2 level max prompt (e.g., instead of /usr/share/doc/foo/bar/xyz only bar/xyz will be displayed). The bash-completion package is also installed by default, which we find very useful. In addition, we have added ~/bashrc.d support seeded with some configuration scripts, one of them being penv, which Liraz and I use all the time - more on that later...
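For the curious, the bashrc.d hook is essentially just a loop that sources every script in the directory; a minimal sketch (the exact path and snippet Core uses may differ):

    # appended to the bash configuration (illustrative sketch)
    if [ -d ~/bashrc.d ]; then
        for script in ~/bashrc.d/*; do
            [ -r "$script" ] && . "$script"
        done
    fi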
Syslog upgrade
The system and kernel logging packages (sysklogd and klogd) have been replaced with rsyslog, an enhanced multi-threaded syslogd with awesome features. This change is in line with Ubuntu, which made the move in Ubuntu 9.10. The Webmin syslog configuration has been tweaked accordingly.
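If you haven't used rsyslog before, it accepts the classic syslog.conf-style rules (plus drop-in files under /etc/rsyslog.d/), so existing knowledge carries over. A hypothetical drop-in rule, not part of the default configuration, might look like:

    # /etc/rsyslog.d/60-example.conf (hypothetical)
    # send all mail facility messages to a dedicated file
    mail.*    /var/log/mail-custom.log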
GRUB-PC (aka. GRUB2)
Our installer (di-live) has gone through a major upgrade and now supports GRUB-PC, a cleaner design than its predecessor with more advanced features. The default configuration has been slightly tweaked to display a timeout by default, run in console mode, and be more verbose.
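For reference, this kind of tweak lives in /etc/default/grub; the values below are illustrative rather than the exact ones Core ships:

    # /etc/default/grub (illustrative values)
    GRUB_TIMEOUT=5                  # show the menu with a timeout instead of booting instantly
    GRUB_TERMINAL=console           # run the menu in console mode
    GRUB_CMDLINE_LINUX_DEFAULT=""   # drop "quiet splash" for more verbose boot output

    update-grub    # regenerate /boot/grub/grub.cfg after editing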
All other changes are available in the changelog.
Features
- Base distribution: Ubuntu 10.04 LTS
- Runs on bare metal in addition to most types of virtual machines (e.g., VMWare, VirtualBox, Xen HVM, KVM).
- Installable Live CD ISO:
  - Supports installation to an available storage device.
  - Supports running in a non-persistent (demo) mode.
- Auto-updated on firstboot and daily with the latest security patches.
- Easy to use configuration console:
  - Displays basic usage information.
  - Configure networking (supports multiple network interfaces).
  - Reboot or shutdown appliance.
- Ajax web shell (shellinabox) - SSH client not required.
- User friendly screen wrapper (byobu) launched by default on login.
- Easy to use web management interface (Webmin):
  - Listens on port 12321 (uses SSL).
  - Mac OS X themed.
  - Network modules:
    - Firewall configuration (with example configuration).
    - Network configuration.
  - System modules:
    - Configure time, date and timezone.
    - Configure users and groups.
    - Manage software packages.
    - Change passwords.
    - System logs.
  - Tool modules:
    - Text editor.
    - Shell commands.
    - Simple file upload/download.
    - File manager (needs support for Java in browser).
    - Custom commands.
- Regenerates cryptographic keys on first boot:
  - SSL certificate used by webmin, apache2, lighttpd - /etc/ssl/certs/cert.pem.
  - SSH keys.
- Console auto login when running in live/demo mode:
  - username root
  - no password (user sets password during installation)
Call for testing and feedback
We need your help in testing the beta releases, and your feedback to make the official release rock! What are you waiting for? Get it here.
Comments
WooHoo! :)
Downloading now!
A few comments for starters:
Great initial feedback
As always, thanks for the great "initial feedback" JedMeister! Looking forward to more once you take it for a spin.
Installed flawlessly with no problems so far
although it did take longer to install than I remember the current (8.04 based) release taking (sorry, haven't done any comparison testing to confirm this). It did surprise me how long it took considering I installed in a virtual environment (VMware Server on Win Server 2003). Although in fairness, the Win Server is pretty sluggish at the best of times. It seemed like the 'copying files' bit was the main holdup. I guess though even if that is true, it's not like you want to be installing too many times anyway - it's more the ongoing performance that matters and I'm sure that'll be great.
So far I have only had a little taster at this stage but looking good guys. I plan to do more testing/playing tonight & tomorrow. That will be on my Proxmox server so will probably get a fairer idea of install time then too.
I think some of the minor tweaks from the previous release - such as the adjustments to the bootsplash - are minimal but make it a little bit more 'newb' friendly and accessible. It fulfils the idea of bringing open source to the masses. I think it's this polish and attention to detail that really makes TKL shine. It also shows your interest in humanity (not just showing off your leet skilz!)
I am definitely impressed with the idea of Byobu and like having all that info there; it's basically like a CLI desktop isn't it - what a great idea. TBH though I'm not yet completely convinced about all the colour (especially the dark blue, not very readable on a black background - could get hard on the eye on those late nights mucking around at the CLI).
More to come no doubt....
Install time
You'll notice that the 10.04 Core beta is much larger than 8.04, weighing in at 144MB vs. 102MB, so the copying of files should take longer. But there is not much reason for Core to weigh so much; we are looking into why this is and what we can do about it.
Another possible cause for the slowness, though I haven't tested it yet, is regressions with ext4. The Lucid Lynx release notes have a note on this.
As always, thanks for the feedback.
I actually didn't notice the significantly larger filesize
so thanks for pointing that out. I did not notice a significant increase in install time for 10.04 desktop when I installed, although I don't tend to sit around watching a desktop install :) My Ubuntu desktops' bootup and shutdown times are obviously faster since the introduction of ext4 and I was not aware of the potential regressions, so thanks for pointing that out.
I just installed TKL Core 10.04 beta on my Proxmox server and the slow install time was not apparent at all. In fact the exact opposite! It took longer to partition the virtual drive than it did to copy files. From start (first boot of the new VM with the ISO loaded) to finish (first reboot into TKL conf console) it took less than 2 minutes to install. That is noticeably (although probably not significantly) faster than Core 8.04 - I reinstalled that into an identical new VM to compare.
The thing that did interest me is that the 8.04 Core (virtual) CPU usage sat between 84-98% usage (peaked at 102%). Whereas 10.04 sat between 92-104% (peaked at 112% - I've never seen that before - 104% is the highest I've ever seen it). Please note that all these CPU usage figures are from the Proxmox Admin WebUI so are not definitive or realtime, although as that's where I'm getting all my data I would imagine they should be consistent VM to VM. As may also be apparent it reports in 2% increments.
I don't think the increased CPU usage is an issue on install (in fact it's desirable IMO). But this higher CPU usage also seems apparent at idle. 8.04 idles at 0%, even when logged in (via Proxmox VNC window) it remains sitting at 0% (occasionally jumping to 2%). 10.04 Core OTOH idles at a consistent 2% and jumps to 6-8% when logged in. None of my other TKL 8.04 VM servers idle at anything above 0%.
What tools would you recommend for investigating this increased CPU load further?
[update] It fluctuates a bit but here's a screenshot while running top
[update 2] I ran top in the current stable (8.04 based) core (that has incidentally been running for about 9 days) and top is the only process that seems to be using any CPU at all (it fluctuates between 0.0 - 0.7)
Once again...
Fantastic work guys!
I'm still tied up with other commitments, but looking forward to giving this a spin.
Excited and Rearing to go
I have an appliance in mind to build a patch for the beta of the core. I couldn't be more excited. After having been introduced to byobu a few times, my first thought when I saw it incorporated was "oh no." But that anxiety was easily enough addressed: the useful keyboard shortcuts provided were all it took to settle me down.
I've tried two installs of the Lucid Core beta, both into a VirtualBox machine. Both failed me, but for different reasons.
At home, I went through the install process, set the root password to blank, and carried on with the install. I was able to perform apt-get functions with no problems. However, when it came time to shift files into the new VM, I found that SFTP was failing to connect. The only thing I knew to try was another username/password combo. I didn't think to try SSH. So I had to give that one up.
I tried again this morning, again with VirtualBox, this time with root/root credentials. Network configuration was bridged with my wired NIC. This time, I got no love from apt-get install. I rearranged my network a little, reconnected, renewed DHCP and still failed to get positive results. I did try a simple ping, which also failed when the target was a host outside our network. Pings inside our network succeeded.
I'm completely open to this being user error in both cases. But, I should add that I'm pretty familiar with the process.
Any thoughts? On either case?
SSH only allows blank passwords when running live
SSH (and SFTP for that matter) is configured to only allow connections with a blank password when running in Live demo mode. This is for convenience. But when the system is installed it will not allow blank passwords.
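The relevant knob is the OpenSSH PermitEmptyPasswords setting; roughly (the exact way our build flips it may differ):

    # /etc/ssh/sshd_config
    PermitEmptyPasswords yes    # live/demo mode
    # PermitEmptyPasswords no   # after installation to disk

    /etc/init.d/ssh reload      # apply the change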
The installer should require you to enter a non-blank password, so that's probably a bug. I'll look into it.
Regarding the networking issues, did you take a look at the settings you received via DHCP and confirm they are correct?
I understand the blank Password Issue
The problem with blank passwords explains my first problem. However, my networking problem during my second trial doesn't seem easy to get at. DHCP settings looked spot on and consistent with my production machines. I tried bridged with wireless as well as bridged with wired ethernet. I also tried NAT, but that was a colossal failure, the workings of which I don't understand.
DNS not assigned?
Please see JP's comment below.
apt-get install of TKLPatch for 10.04 based beta?
I notice that TKLPatch is not currently available for the 10.04/Lucid build. As a workaround I added the TKL 8.04/Hardy universe repo to the sources.list and TKLPatch seems to have installed ok although I haven't used it yet.
For others that are interested: open /etc/apt/sources.list in an editor (replace nano with your choice of text editor) and then add the Hardy universe repo line, roughly as sketched below.
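In rough outline (the repo line below is an approximation; double check it against the wiki note mentioned below):

    # open the sources list in an editor
    nano /etc/apt/sources.list

    # add a line roughly like this (exact TurnKey Hardy URL/components may differ):
    #   deb http://archive.turnkeylinux.org/ubuntu hardy universe

    # then update and install
    apt-get update
    apt-get install tklpatch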
Although it probably won't cause any problems in this instance, it's not good practice to have repos for different releases enabled. So for good measure I commented it out after installing TKLPatch (added # at the start of the line).
I have also added a note on the wiki.
Added TKLPatch to the Lucid repository
Thanks Jedmeister, I added tklpatch to the Lucid repository and updated your note on the wiki.
No worries
If I'd known you'd be so on to it Alon I probably wouldn't've bothered with the long winded post. Oh well, all good! :)
Networking issues
Yay! Thanks for releasing the beta!
First a little bit about my install, I have installed to a VM on Windows 2008 Hyper-V.
Just to echo some of the comments above:
1. I have also noticed the slow screen refresh issues mentioned above. Even simply holding the enter key down on the keyboard shows noticeable lag.
2. DNS settings have not been setup correctly. Update - issuing the command "udhcp renew" resolved the DNS issue.
3. I am also unable to ping external to the box, however the Turnkey VM has correctly received an IP address from DHCP and I am able to telnet into the box.
4. Adding "hostname MACHINENAME" to the /etc/network/interfaces file under the correct interface no longer adds an "A" record to DNS
Peter.
di-live leverages d-i (debian-installer)
di-live is our custom installer, making it possible to install a "live" system to the hard disk. It does this by leaving most of the heavy lifting to d-i related packages and their default configurations. For example, partitioning is handled by the partman packages; auto-partitioning is handled by partman-auto.
You do raise a valid point though, and it deserves more thought whether to preseed partman-auto with a different recipe. In the meantime though, you can always manually partition your system to your liking.
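For anyone who wants a different layout out of the box, partman-auto can be preseeded with a few debconf selections; a hypothetical snippet (not something di-live currently ships):

    # d-i preseed (hypothetical)
    d-i partman-auto/method string regular
    d-i partman-auto/choose_recipe select atomic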
BUG #596551 - /etc/environment $PATH overwritten
I just raised a bug report #596551.
The path environment variable set in /etc/environment is being overwritten by the new .bashrc script.
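The usual fix is for the bashrc snippet to extend the existing value rather than replace it; a minimal sketch:

    # instead of overwriting:
    #   export PATH="/usr/local/sbin:/usr/local/bin"
    # extend what /etc/environment already set:
    export PATH="/usr/local/sbin:/usr/local/bin:$PATH"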
Thanks
Thanks for reporting the issue. We'll fix that in all upcoming releases.
Beta has slow console but there is a workaround
Hello guys,
Well, first, thank you for TurnKey Linux - I believe this will make our life easier. Important points of TurnKey Linux are: the ability to be installed on bare metal and the low footprint, which is great when you don't have a ton of bandwidth.
Now, we had trouble with the beta packages on Ubuntu 10.04: indeed, the shell was very slow. We got it back to normal speed by removing some packages:
apt-get remove --purge byobu screen
then reboot and you are back to normal speed.
Hope this helps!
Reply from Dustin Kirkland
I have to admit that I have
I have to admit that I have un-installed it, the lag while using it is bad. I might give it another go once that bug is resolved.
This is all good news for me!
I'm planning on running the official release version as a direct replacement for Windows Web Server 2003*. Reboot time is somewhat important; if the websites are not showing, customers will surf to the competition. Reliability is also an issue, hence I'm planning on running the official release version as....
:o)
*Mainstream support for Windows server 2003 ended yesterday; yep, I'm making the switch.
Why is reboot time important?
I would not think you'd have any need to reboot?
If your hardware is up to it (CPU x64 capable and supports virtual extensions) I strongly recommend ProxmoxVE as a hypervisor OS. That way you can run separate virtual servers for the different functions you want. Especially if you're new to Linux, you can play without fear of bringing your web server down. If you use the OpenVZ virtualisation then you'll be amazed how little overhead a number of virtual Linux servers use.
I still prefer Ubuntu for invalid reasons.
But I was trying to put together a vm today that started with an install script (.sh) that handled the make and the configure and the movement of files and the checking of libraries. It failed in Lucid beta, so I thought I'd give it a try in the Lenny beta. It failed at exactly the same point, but here's the rub: it took Lenny about 1/3 the time to process the script to the same point. I don't know what that points to, but I thought I'd offer it up. More info is better than...well, should be better.
I have very limited experience with Debian
but I have read many reports of Debian being much "snappier" (more responsive) than Ubuntu, even when configured similarly. I suspect that this comes down to the Ubuntu 'bloat'. Although bloat is generally used in a derogatory way, it usually adds features and improves user experience - that's always the tradeoff.
I think the explanation of why this difference exists probably comes down to the userbase. (As you'd probably know) Ubuntu is a very popular distro with a really wide range of users at vastly different levels of Linux/PC experience and it is packaged as a (somewhat) finished product, thus 'user friendliness' and cool features are particularly important. Debian OTOH seems to have a much more tech savvy user base with more of a focus on producing a generic base product with the end user customising it to their individual usage scenario.
Somewhat important, a bit important
After doing updates, servers reboot.
If your website is not there when the customer surfs your way, the customer may go elsewhere. If the customer sends you an email which gets kicked back coz your server is rebooting, this makes the customer unhappy (or they throw a hissy fit coz they've paid you and now you’re rejecting their emails, so you must be running off with their money, and the world is ending - or sommat like that.)
Although undesirable, sometimes a reboot happens; a quick reboot is more favourable than a slow reboot. Unless Linux doesn’t reboot after updates, in which case it won’t be an issue anymore.
There is the old ‘turn it off and back on’ thing which also occasionally happens.
The reason I mentioned boot time, is that boot is about the only time I see the server doing anything. Stuff flashes up on screen, I can see how long it takes before it’s all done. It’s the only comparative benchmark I get. Usually they just sit there in the corner of the room doing their stuff.
Thanks for the ProxmoxVE recommendation, I've not got into virtual servers (yet), personal preference I guess, or habit. I never quite understood the reasons for running a virtual server as opposed to a real one. We have a business that runs a few websites, served by 4 servers: plus a spare that’s used for development (the fun bit). It’s an overgrown hobby, but it pays the bills.
:o)
Welcome to the wonderful world of Linux :)
Reboots are generally only required for kernel updates. From my experience TKL servers never need to reboot after the auto security updates install. It is not uncommon for Linux servers to have not rebooted for years (as opposed to at least once most months for Win servers - i.e. Patch Tuesday). As an aside, the same is generally true for Linux desktops (although some GUI updates do require logoff/logon). Another useful feature of Linux is that it's really easy to set up an update (apt) cache so your updates are only downloaded once and then distributed amongst your machines (a bit like WSUS but SO much easier to set up and admin).
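For example, with apt-cacher-ng (one of several options) it's roughly a case of installing it on one box and pointing the others at it (the hostname below is made up):

    # on the machine acting as the cache
    apt-get install apt-cacher-ng

    # on each client, tell apt to use the cache (default port 3142)
    echo 'Acquire::http::Proxy "http://apt-cache.example.lan:3142";' > /etc/apt/apt.conf.d/01proxy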
IMO using a VM environment has many, many advantages; one of the obvious ones to me is being able to get the maximum out of your hardware. E.g. a Linux web server will generally run quite happily with only 512MB RAM (obviously depending on load) - the rest is pretty much wasted.
With 4 physical servers I'd be even more inclined to run ProxmoxVE (or similar). PVE has a great clustering system so you can migrate VMs from hardware node to hardware node (eg if you want to do hardware maintenance). It also has an inbuilt automatic backup system which together with clustering is a great way to quickly recover from hardware failure. Also because of the low overheads (esp if using OpenVZ) you could ditch your dev server (or join it to your cluster) because you can do all your dev work within a VM on one of your existing servers.
Being able to migrate VM servers effortlessly between hardware nodes makes manual load balancing a breeze and I'm sure you could set up a reverse proxy to do auto load balancing between identical VMs on different hardware nodes. This would maximise efficiency and spread server load, making happier customers, with the added bonus of protection against many sources of hardware failure.
Whilst the setup I'm suggesting does create a level of redundancy, I believe that a lot of this redundancy is a positive thing in a production environment. For example your email and web servers can be separate entities; if for some reason your mail server crashes, then your websites won't also go down (and vice versa). Even without load balancing, it allows you to spread your clients' web, mail and other servers between hardware nodes so again if a hardware node (or VM) goes down you'll only have a limited number of crabby customers (instead of all of them). It's great for development and allows for incremental updates and testing with new features and real-world side-by-side comparison of alternative software solutions without having to constantly muck around installing and reinstalling to hardware. Completely destroyed a server VM from an untested mod? No worries, just load a recent backup and go again within minutes! You can even do a clean install (from an OpenVZ template) in less than a minute, and customise and preconfigure your templates, reducing the time to roll out a new server to minutes instead of half an hour plus.
One of the greatest features I find with PVE (which probably applies to other similar solutions too) is the WebUI. It is great for true headless operation of your server farm. You can realistically chuck out all your extra monitors and keyboards after install (I'd probably hang onto one set just in case).
Assuming your CPU supports PVE and you plan to go down the VE/VM path then I'd suggest that you maximise your RAM (if you haven't already). In my experience you can never have too much RAM - ProxmoxVE supports up to 64TB (which would probably be overkill). My Proxmox Server runs 12 VMs +/- (depending on what I'm playing with at the time). ATM I'm running 9 Linux servers (a couple for testing and dev - the others run all the time), a couple of Linux Desktop Distros (again for testing/dev and general playing - these don't run all the time) and one WinXP VM (which runs all the time). All this runs flawlessly on top of a 4-5yo desktop Intel Core 2 Duo system with 8GB of RAM. The bottleneck is the network speed (single NIC) connected through a cheap consumer grade router/modem (100Base-TX connections). Just a note with PVE: some (older) hardware does not play nicely with the latest 1.5 installation of PVE. As a workaround install 1.4 and then update to 1.5.
FYI 2.6.32 Kernel bug - probably non-critical for server but..
Just thought I'd share info about this bug. I don't think it's related to the sluggishness that others have reported (and I've experienced) as disabling screen appears to resolve that one. I think that it does help explain the increased idle CPU load that I have experienced (which is significantly better but still remains after disabling screen). This bug is certainly not a show stopper (unless you are relying on decent battery life on a laptop), but I thought I'd mention it as some may have an interest in reducing their power consumption and accompanying heat generation.
I have been installing (clean install upgrade) Ubuntu 10.04 on a number of work netbooks. I have noticed that they get hotter and have shorter battery life than they did under 9.10. I have discovered that this relates to a kernel bug which causes the kernel scheduler "Load balancing tick" to work overtime, especially under load.
I have confirmed that it also occurs on my desktop system (although TBH I hadn't noticed until I checked). According to comments on this bug report it is not just limited to Ubuntu 10.04; it's actually a 2.6.32 kernel bug that affects multiple distros. It has also been reported as occurring in the Maverick/10.10 alpha (2.6.36 kernel). So it most likely affects server systems too (including this beta).
It seems to affect single and multicore CPUs but I have only confirmed it on Intel ones. This probably does not have a huge impact on server (or desktop) end users but I think it's still unfortunate as it will obviously increase power usage and heat.
For any that are interested, the bug report contains some workarounds. Many of them don't fix the problem but do help reduce CPU load. The only way to completely work around the issue is to use an alternative kernel - namely 2.6.31 (from Karmic/9.10) - or there is a PPA of a patched Maverick kernel (2.6.36), but when I checked just then it's still in the build queue.
It seems likely that this bug will be resolved by the time Maverick releases. Hopefully the patch will be backported for the Lucid LTS kernel...
[edit] I have just tested a recent TKL Lucid Core install (in VBox on XP w/ AMD CPU) using powertop and the kernel scheduler "Load balancing tick" is causing the most wakeups at idle (~40-50/sec, occasionally down to the low 20s) but it is nowhere near the ~70+/sec I see on my desktop (bare metal w/ Intel CPU). Unfortunately once I put a bit of load on it, it jumps to ~120-140/sec (which again is still lower than my desktop, which jumps to 200+). As the load balancer seems to ramp up the more load there is on the system, this discrepancy could well be explained by the fact that CPU load in Core is very minimal compared to my desktop setup.
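For anyone wanting to check their own install, powertop is in the repos:

    apt-get install powertop
    powertop    # watch the wakeups-from-idle figure and the top causes list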