[kwlug-disc] LVM Recovery

Chris Irwin chris at chrisirwin.ca
Tue Jun 29 11:18:16 EDT 2010


A while back I had a RAID failure. Apparently, while the SATA protocol
itself allows hot add & remove of drives, the SATA chipset in my
server does not like it. Hot-swapping a failed drive caused an instant
freeze. Even better, the remaining RAID5 array was apparently out of
sync somehow (mid-write during the freeze?).

I decided that thanks to my awesome backups (many thanks to Khalid for
pushing me toward dump!) I would just rebuild the array and restore
from backup. I made my mdadm array, ran pvcreate, vgcreate, and
lvcreate, and restored my filesystems. I knew to update /etc/mdadm.conf
and rebuild my initrd so the new array could be assembled at boot. All
my data is
back. Wonderful.
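For reference, the rebuild went roughly like this. Device names, the VG
name "vg0", and sizes are all hypothetical here; this is a sketch of the
steps, not my exact commands:

```shell
# Recreate the RAID5 array (device names hypothetical)
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Layer LVM on top of the new array
pvcreate /dev/md0
vgcreate vg0 /dev/md0
lvcreate -L 50G -n root vg0

# Record the array so it assembles at boot, then rebuild the initrd
mdadm --detail --scan >> /etc/mdadm.conf
update-initramfs -u       # Debian/Ubuntu; dracut -f on Fedora/RHEL

# Make a filesystem and restore from the dump backups
mkfs.ext4 /dev/vg0/root
mount /dev/vg0/root /mnt
cd /mnt && restore -rf /backup/root.dump
```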

But the system won't boot. Turns out that, much like mdadm, LVM uses
UUIDs to identify its physical volumes and volume groups. My system
would not recognize my newly created LVM volumes, and thus couldn't
find root.

Does anybody know how I can tell the system to enable and use my new
LVM group at boot time? /etc/lvm seems to contain an LVM layout backup,
but nothing actually telling the system to *use* that array like mdadm
does. Should I have manually specified the same UUIDs when I created
the array?
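From what I can tell (and I'm not certain this is the blessed way), the
metadata backup under /etc/lvm/backup can be used to recreate the PV
with its original UUID and restore the VG layout. Assuming the old VG
was called "vg0" and the PV lives on /dev/md0, something like:

```shell
# Hypothetical sketch: pull the old PV UUID out of the metadata backup,
# recreate the PV with that UUID, then restore the VG configuration
OLD_UUID=$(grep -A2 'pv0 {' /etc/lvm/backup/vg0 | grep 'id =' | cut -d'"' -f2)
pvcreate --uuid "$OLD_UUID" --restorefile /etc/lvm/backup/vg0 /dev/md0
vgcfgrestore -f /etc/lvm/backup/vg0 vg0
vgchange -ay vg0
```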

I poked around for a while and eventually got it to work. But I
couldn't tell you the magic voodoo that actually solved the problem.
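My best guess at what actually did it: rescanning and activating the new
volume group, then regenerating the initramfs so the boot environment
picked up the new UUIDs. Roughly:

```shell
# Rescan for volume groups and activate everything found
vgscan
vgchange -ay

# Rebuild the initramfs so early boot knows about the new VG
update-initramfs -u -k all   # Debian/Ubuntu; dracut -f on Fedora/RHEL
```

If /etc/fstab or the bootloader config references filesystem UUIDs, those
would need updating too, since the restored filesystems got new ones.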

-- 
Chris Irwin
<chris at chrisirwin.ca>

P.S. Probably the worst part of the ordeal was that a drive hadn't
actually failed. I was trying to justify my purchase of four hard
drives to my significant other. I was demonstrating how the system can
cope with any drive failing without losing data. /me facepalms
