Virtu All wrote:

Hi guys
We have an instance on AWS running TurnKey Linux 7.6 (Debian Wheezy). This instance hosts an important application that required upgrading packages such as libcurl, php5-memcached, glib and many other dependencies. We failed to upgrade them, despite changing the deb sources list many times and trying many other steps (see the URLs at the bottom of the page), so finally we upgraded directly to Debian 8 (Jessie). Everything worked fine until we rebooted: the machine failed and did not boot again.
We preferred to upgrade in place because reinstalling the new TurnKey LAMP from scratch would involve a lot of work from the different companies running components on this server.
We opened a ticket with Amazon and they replied as follows:

First of all, I need to explain that the AMI used is provided by the community and not supported by Amazon. Also, it is using Debian, which is not part of the supported third-party software on AWS. [1]

All the activities done must be considered as my personal best effort on this.

In detail, the AMI ami-9cfda3ce is provided by TurnKey Linux [2] and it is supported through github. [3]

According to our screen-sharing, we identified that the issue is

root (hd0)
Filesystem type is ext2fs, using whole disk
kernel /boot/vmlinuz-3.16.0-4-amd64 root=/dev/xvda1 ro
initrd /boot/initrd.img-3.16.0-4-amd64
ERROR Invalid kernel: xc_dom_probe_bzimage_kernel: unknown compression format
xc_dom_bzimageloader.c:394: panic: xc_dom_probe_bzimage_kernel: unknown compression format
ERROR Invalid kernel: xc_dom_find_loader: no loader found
xc_dom_core.c:536: panic: xc_dom_find_loader: no loader found
xc_dom_parse_image returned -1
Error 9: Unknown boot failure

This problem occurs when an invalid kernel is used to run an Amazon EC2 instance.

Unfortunately, the kernel image in use (linux-image-3.16.0-x-amd64) is compressed using xz, and Amazon AWS cannot boot it.

To fix the issue you will have to run a kernel in the correct format (gz).

In order to fix the problem, the best solution from the Amazon support and infrastructure perspective is launching a new working instance and migrating all the data from the volume vol-1673bf18. I strongly recommend that you follow this path.

Alternatively, you may launch a debug instance and install a new kernel with the correct image format. [4] Because the kernel is provided by TurnKey as well, I would ask the TurnKey software team which kernel is appropriate to install.
Please keep in mind that all of this activity must be considered outside the AWS support scope and cannot be performed with the help of AWS support representatives, for all the reasons mentioned above.


Please let me know in case you have any doubt or if I can do anything more regarding this case.
AWS Support

Do you at TurnKey have any suggestions to solve this problem? We can mount the volume on a new instance and work on it.
Keep in mind that, before upgrading the kernel, we tried all the suggestions in these URLs without success.
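For anyone hitting the same wall: one way to confirm which installed kernels carry the xz-compressed payload that pv-grub chokes on is to look for the compression magic bytes inside each image. This is only a sketch and the magic-byte search is a heuristic; the /boot paths are assumptions.

```shell
#!/bin/sh
# Heuristic: report which compression magic appears inside a kernel image.
# xz payloads contain fd 37 7a 58 5a; gzip payloads contain 1f 8b 08.
detect_kernel_compression() {
    hex=$(od -A n -t x1 "$1" | tr -d ' \n')
    case "$hex" in
        *fd377a585a*) echo xz ;;      # needs a newer pv-grub (1.0.4)
        *1f8b08*)     echo gzip ;;    # boots with the older pv-grub
        *)            echo unknown ;;
    esac
}

# Example: check every kernel under /boot (path is an assumption)
for img in /boot/vmlinuz-*; do
    [ -f "$img" ] && echo "$img: $(detect_kernel_compression "$img")"
done
```

On the affected instance this should report the 3.16 (Jessie) kernel as xz and the 3.2 (Wheezy) kernel as gzip.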

Thanks in advance

Virtu All wrote:

current fstab file:

# /etc/fstab: static file system information.
# <file system> <mount point> <type> <options> <dump> <pass>
proc            /proc         proc   defaults  0  0
/dev/xvda1      /             ext4   defaults  0  0
/dev/xvda2      /mnt          auto   defaults  0  0
/dev/xvda3      none          swap   sw        0  0
and current grub file:

### BEGIN /etc/grub.d/40_custom ###
default 1
timeout 0
hiddenmenu

title 3.16.0-4-amd64
root (hd0)
kernel /boot/vmlinuz-3.16.0-4-amd64 root=/dev/xvda1 ro
initrd /boot/initrd.img-3.16.0-4-amd64

title 3.2.0-4-amd64
root (hd0)
kernel /boot/vmlinuz-3.2.0-4-amd64 root=/dev/xvda1 ro
initrd /boot/initrd.img-3.2.0-4-amd64
### END /etc/grub.d/40_custom ###


What do we need to change in order to boot on the Amazon instance?

Jeremy Davis wrote:

It sounds like the new Jessie kernel is what's failing to boot. If so, try booting with the Wheezy kernel instead.

Are you using the Hub and doing TKLBAM backups? If so, it's easy. Just restore your last backup to a new v14.0 (Jessie-based) instance (the M in TKLBAM stands for migration).
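For anyone following along, the restore workflow described above looks roughly like this on the fresh instance. It's a sketch: the API key and backup ID are placeholders (tklbam-list shows your real IDs), and it's wrapped in a function so nothing runs against a live Hub account by accident.

```shell
#!/bin/sh
# Sketch of a TKLBAM migration to a fresh v14.0 (Jessie) instance.
# Both arguments are placeholders -- substitute your own values.
tklbam_migrate() {
    tklbam-init "$1"      # link this instance to your Hub account (API key)
    tklbam-list           # list your backups; note the ID of the latest one
    tklbam-restore "$2"   # restore (and thereby migrate) that backup ID
}

# Usage (placeholders): tklbam_migrate YOUR_HUB_APIKEY 42
```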

Jeremy Davis wrote:

Worst case scenario you could mount the volume of your server to a fresh AMI and recover your data that way.
Jeremy Davis wrote:

Thanks for taking the time to post back and I'm glad to hear that you got your issues sorted.

FWIW, even if you can't boot, you can still mount the volume. I meant mounting the volume (of your broken instance) to a new (not broken) instance as a secondary volume. You can then chroot into the old volume and install/remove/tweak packages etc. (e.g. in this case you could have manually installed the Wheezy kernel). Anyway, it sounds like that is no longer relevant (although it may be useful knowledge for future reference).
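A sketch of that rescue procedure, assuming the broken volume shows up as /dev/xvdf1 on the rescue instance. The device name and the kernel package name are assumptions, and it's wrapped in a function so the mounts don't run blindly.

```shell
#!/bin/sh
# Sketch: mount the broken instance's root volume on a rescue instance,
# chroot in, and install the gzip-compressed Wheezy kernel.
# /dev/xvdf1 and the package name are assumptions -- adjust to match.
rescue_chroot() {
    mkdir -p /mnt/broken
    mount /dev/xvdf1 /mnt/broken
    mount --bind /dev  /mnt/broken/dev
    mount --bind /proc /mnt/broken/proc
    mount --bind /sys  /mnt/broken/sys
    chroot /mnt/broken apt-get update
    chroot /mnt/broken apt-get install -y linux-image-3.2.0-4-amd64
    umount /mnt/broken/sys /mnt/broken/proc /mnt/broken/dev
    umount /mnt/broken
}
```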

New HVM instances don't have this issue because they use a different mechanism at boot time. Creating new HVM images has been on our roadmap for some time, but there have been some technical holdups. We are currently beta testing our HVM images and hope to release them ASAP.

Regarding TKLBAM: it's always useful! :) Sorry to be facetious. Did you test it, or just assume that it wouldn't work? By default it will back up everything in /var/www (the usual webroot) and all MySQL DBs. It also knows which additional packages you installed from default repos and will install them on restore/migration. You can use overrides to include other areas of the filesystem if you want (or exclude parts included by default).

Currently OOTB limitations are that it won't automatically (re)install software from non-default repos (i.e. anything that is not in Debian or TurnKey repos) or software installed using other mechanisms (e.g. python packages installed with pip or perl packages installed by cpan). But even those can be easily worked around using the hooks mechanism.

Liraz Siri wrote:

As per my comment on the GitHub issue:

One of the following solutions should work:

  1. Shut down the instance and update its configuration to work with the newest version of pv-grub (currently 1.0.4).
  2. Boot the upgraded instance with the Wheezy kernel by editing /boot/grub/menu.lst and setting the default kernel to 1 (the Wheezy kernel) instead of 0 (the new kernel).

    FWIW, if you failed to do this before rebooting the instance after the upgrade, and for some reason you don't want to (or can't) upgrade pv-grub, you are going to need to transfer the volume to another instance in order to edit it. To do this, detach the root volume and reattach it to another instance in the same availability zone. It will auto-mount under /media/ebs. Then edit boot/grub/menu.lst.

    To attach the volume back to the original instance, you'll want to select /dev/sda1 as the name of the device.
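The volume shuffle in option 2 can be sketched with the AWS CLI roughly as below. The instance IDs are placeholders, the rescue instance must be in the same availability zone, and it's wrapped in a function so nothing detaches by accident.

```shell
#!/bin/sh
# Sketch: move the broken root volume to a rescue instance, fix menu.lst,
# then move it back. Instance IDs are placeholders.
fix_default_kernel() {
    vol=vol-1673bf18     # root volume of the broken instance (from the thread)
    rescue=i-RESCUE      # placeholder: a working instance, same AZ
    broken=i-BROKEN      # placeholder: the broken instance

    aws ec2 detach-volume --volume-id "$vol"
    aws ec2 attach-volume --volume-id "$vol" \
        --instance-id "$rescue" --device /dev/sdf
    # ...on the rescue instance, edit /media/ebs/boot/grub/menu.lst and
    # set "default 1" (the Wheezy kernel)...
    aws ec2 detach-volume --volume-id "$vol"
    aws ec2 attach-volume --volume-id "$vol" \
        --instance-id "$broken" --device /dev/sda1
}
```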

We're also trying to figure out if there is a way we can fix this automatically through the Hub.
