TurnKey Linux Virtual Appliance Library

Repository appliance: device mapper/lvm/initramfs issue

dekers.subs

Hi all,

 

Running TurnKey Repository in a VMware vCloud environment. I extended the LVM volume group and expanded the root filesystem to add some more disk. All of that went according to plan and everything was running fine. After an 'apt-get update' and 'apt-get upgrade', the VM refuses to boot and drops into busybox in the initramfs environment. The error provided is (paraphrased):

ALERT! /dev/disk/by-uuid/<UUID> does not exist.
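
For anyone comparing notes, a grow like that normally looks something like the following (illustrative only: the /dev/sdb name comes from the pvscan output below, and resize2fs assumes an ext filesystem):

# add the second disk as a PV, grow the VG and the root LV, then the filesystem
pvcreate /dev/sdb
vgextend turnkey /dev/sdb
lvextend -l +100%FREE /dev/turnkey/root
resize2fs /dev/turnkey/root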

I fired up lvm from the initramfs and was pleased to see that my disk devices were not, in fact, gone or damaged:

(initramfs) lvm
lvm> pvscan
  PV /dev/sda2   VG turnkey   lvm2 [18.14 GiB / 0    free]
  PV /dev/sdb    VG turnkey   lvm2 [64.00 GiB / 0    free]
  Total: 2 [82.14 GiB] / in use: 2 [82.14 GiB] / in no VG: 0 [0   ]
lvm> vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "turnkey" using metadata type lvm2
lvm> lvscan
  inactive        '/dev/turnkey/root' [81.64 GiB] inherit
  inactive        '/dev/turnkey/swap_1' [512.00 MiB] inherit

After a quick 'lvchange -ay /dev/turnkey/root' and 'lvchange -ay /dev/turnkey/swap_1' I was able to exit the initramfs and the machine booted fine.
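
(As a side note, the same thing can be done in one shot from the busybox prompt by activating every LV in the volume group; it is equivalent to the two lvchange calls above:)

(initramfs) lvm vgchange -ay
(initramfs) exit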

My first thought was that something went wrong with regenerating the initramfs during the upgrade, so I re-ran update-initramfs and it completed (though it complained about some casper scripts, which per some Google search results don't seem to be a problem). The machine still didn't boot without human intervention. I then tried symlinking /usr/share/initramfs-tools/scripts to /scripts, which seemed to make update-initramfs happier, but a reboot still failed.
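
For anyone debugging the same thing, it is worth checking whether the LVM pieces actually made it into the regenerated image (a generic diagnostic, not specific to this appliance):

# look for the lvm binary and the local-top/lvm2 script inside the image
lsinitramfs /boot/initrd.img-$(uname -r) | grep -i lvm
# or, if lsinitramfs is not available on this release:
zcat /boot/initrd.img-$(uname -r) | cpio -t 2>/dev/null | grep -i lvm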

The problem is clearly something with lvm/device mapper in the initramfs, but I thought it best to see if anybody else had seen/solved this before me rather than keep digging where I may be duplicating somebody else's effort.

Any thoughts or ideas for me?

Tom

thanks

This seems to have happened a while after an lvextend, which went fine (including reboots).

After a wild goose chase regarding incorrect UUIDs, update-grub, grub-install, grub-mkdevicemap... it turned out to be fixed by this.

Many thanks for posting and sharing this.


Guest

Some thoughts ;-)

Good start, Deker. Let me add a few details so your machine reboots without human intervention.

# Reinit the map and the grub file to avoid the UUID
mv /boot/grub/device.map /boot/grub/device.map.backup
grub-mkdevicemap
update-grub

# You know this part already
cd /
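# note (added for clarity): this creates the symlink
# /scripts -> /usr/share/initramfs-tools/scripts, so update-initramfs
# finds its hook scripts in the location it was complaining about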
ln -s /usr/share/initramfs-tools/scripts
update-initramfs -u

# Just reload the box
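
A quick check before reloading the box (my own addition, not part of the recipe above) is to make sure the regenerated config points at a root device that actually exists:

grep 'root=' /boot/grub/grub.cfg
blkid /dev/mapper/turnkey-root
# the root= value should either name the mapper device directly or use a
# UUID that matches what blkid reports; a stale UUID means update-grub
# did not take effect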

Cheers!

Guest

Thanks

Great fix!

Thanks!

Guest

THX

Excellent. Thank you. Best article.

Guest

No boot at all - broken grub case

Today a friend of mine had an issue with a TurnKey VM, and since the fix was pretty close to the one above I decided to share it here... it may help somebody with a similar problem.

1. Boot from DEBIAN LiveCD
2. Go with the graphical install and configure the network
3. In the partitioning step, go back to the main list of steps and start a shell. If the LVM is not already up, use
    pvscan
    vgscan
    vgchange -a y
4. Verify the LV root with 'lvdisplay' (/dev/turnkey/root) and the boot device with 'fdisk -l' (/dev/sda1)
    mkdir /newroot
    mount /dev/turnkey/root /newroot
    mount /dev/sda1 /newroot/boot
5. Mount the required virtual filesystems (/dev, proc, sysfs) into the new root
    mount -o bind /dev /newroot/dev
    mount -t proc none /newroot/proc
    mount -t sysfs sys /newroot/sys
6. Change the root
    chroot /newroot/ /bin/bash
        CRITICAL: since GRUB is completely broken, reinstall it
        grub-install --force /dev/sda
7. Reinit the map and the grub file to avoid the UUID
    grub-mkdevicemap
    update-grub
8. Prepare the 'scripts' directory in the place where update-initramfs expects it
    cd /
    ln -s /usr/share/initramfs-tools/scripts
    update-initramfs -u
9. Reboot and you are back in the game
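
If you prefer to leave the live environment cleanly (my addition; the recipe above simply reboots), exit the chroot and unmount everything in reverse order first:

    exit                                  # leave the chroot
    umount /newroot/sys /newroot/proc /newroot/dev
    umount /newroot/boot /newroot
    reboot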

Cheers!

Guest

bug in lvm2@initramfs

Hi,

  1. Back up /usr/share/initramfs-tools/scripts/local-top/lvm2
  2. Edit /usr/share/initramfs-tools/scripts/local-top/lvm2
  3. Between 'modprobe -q dm-mod' and 'activate_vg "$ROOT"', add this line to initialize your LVM (see the sketch after this list): lvm vgchange -ay
  4. Rebuild your initramfs: update-initramfs -u
  5. Rebuild your GRUB config: update-grub
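
For reference, the patched section of local-top/lvm2 ends up looking roughly like this (the surrounding lines differ between lvm2 versions, so treat it as a sketch rather than a verbatim diff):

    # ... existing script above ...
    modprobe -q dm-mod

    # added per step 3: activate every volume group so the root LV is
    # present before the script's own activate_vg call runs
    lvm vgchange -ay

    activate_vg "$ROOT"
    # ... existing script below ...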

enjoy :-)
