Re: LVM label lost / system does not boot

Thanks to a link from Ger (and a lot of googling) I am a couple of steps ahead:
I was able to recover the volume group configuration and use it for a "vgcfgrestore", which worked (!).
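For the archives, the recovery was roughly along these lines; the file name and the UUID below are placeholders, not my exact values:

  # LVM keeps plain-text copies of the VG config at the start of the PV;
  # dump the first MiB of the array and fish the newest "cmain { ... }" block out
  dd if=/dev/md0 bs=512 count=2048 | strings > /root/md0-metadata.txt
  # save that copy as a backup file, then re-create the PV label using the
  # UUID recorded in its physical_volumes section
  pvcreate --uuid "xxxxxx-xxxx-xxxx-xxxx-xxxx-xxxx-xxxxxx" \
           --restorefile /root/cmain.backup /dev/md0
  # finally restore the volume group metadata itself
  vgcfgrestore -f /root/cmain.backup cmain
  vgscan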
Next was a vgchange:
vgchange -a y
 device-mapper: resume ioctl failed: Invalid argument
 Unable to resume cmain-data (252:4)
 5 logical volume(s) in volume group "cmain" now active

dmesg showed:
device-mapper: table: 252:4: md0 too small for target: start=2390753664, len=527835136, dev_size=2918584320
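If I read that message right, the mapping for cmain-data simply runs past the end of md0 (numbers are 512-byte sectors):

  2390753664 (start) + 527835136 (len) = 2918588800 sectors needed
  dev_size of md0                      = 2918584320 sectors present
  shortfall                            =       4480 sectors (~2.2 MiB)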

I tried to mount the other volumes, but this did not work either: the classic "you must specify the filesystem type" message (xfs in my case), and with the type given explicitly, the usual complaint about a bad superblock. xfs_check did not help either.
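(For anyone reproducing this, a minimal sketch of non-destructive checks, assuming the LV shows up as /dev/cmain/data; xfs_repair -n is the no-modify variant that only reports and never writes:)

  # mount read-only with the type given explicitly
  mount -t xfs -o ro /dev/cmain/data /mnt
  # no-modify check: reports problems without touching the disk
  xfs_repair -n /dev/cmain/data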

I went one step back and looked at the superblocks of the two old hard disks (which are no longer part of the array). I noticed one difference: the chunk size of the "old" RAID array was 64K, while the new array uses 512K.
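(This is where the two chunk sizes show up; the member device name is just an example:)

  mdadm --examine /dev/sde1 | grep -i chunk   # old superblock on a replaced disk: 64K
  mdadm --detail  /dev/md0  | grep -i chunk   # the rebuilt array: 512K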

Could this be the reason LVM is off and looking in the wrong place?

The current situation: two old disks and two new disks make up a clean(?) RAID 5 array with a 512K chunk size, with the problems described above.
I still have the two replaced drives, but the file system content changed during the upgrade, so my guess is I cannot simply hook everything back together as it was and be happy. (I would probably have to zero the superblocks of the two old HDs (the ones that were in the new array), but would likely end up with the same problems.)
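(For reference, the commands I am talking about, with a placeholder device name; --examine is read-only and shows what a disk currently claims to be, --zero-superblock is the destructive one:)

  mdadm --examine /dev/sdX1           # harmless: print the md superblock on that disk
  mdadm --zero-superblock /dev/sdX1   # destructive: wipe the md metadata from it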

Any ideas? Or is this the point where I am on the wrong mailing list?

Thanks so far, I already learned a lot.
Andreas



On Sun, Jun 5, 2011 at 14:47, Andreas Schild <andreas@soulboarder.net> wrote:
Hi
The first part might sound like I am in the wrong group, but bear with me...
(I probably am, but I googled up and down RAID and LVM lists and I am still stuck):
I have a software RAID 5 with 4 disks and LVM on top. I had one volume group with two logical volumes (for root and data).
I wanted to upgrade capacity and started by failing a drive, replacing it with a bigger one, and letting the RAID resync. This worked fine for the first disk. The second disk also apparently worked (it resynced and all looked good), but after a reboot the system hung.
After some back and forth with superblocks (on the member devices, never on the array itself) I was able to re-assemble the array in a clean state.
The system still does not boot, though: 'Volume group "cmain" not found'.
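(The per-disk swap itself was the usual fail/remove/add cycle, roughly this; device names are examples:)

  mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1   # kick the small disk out
  # physically swap the disk, partition it at least as large, then:
  mdadm /dev/md0 --add /dev/sda1                       # let the array resync onto it
  cat /proc/mdstat                                     # watch the rebuild finish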

I booted a live cd, assembled the array and did a pvck on the array (/dev/md0):
"Could not find LVM label on /dev/md0"
pvdisplay /dev/md0 results in:
 No physical volume label read from /dev/md0
 Failed to read physical volume "/dev/md0"

I do not have a backup of my /etc/ and therefore no details regarding the configuration of the LVM setup (yes, I know...)
All I have of the broken system is the /boot partition with its contents.

Several questions arise:
- Is it possible to "reconstitute" the LVM with what I have?
- Is the RAID array really ok, or is it possibly corrupt to begin with (and the reason no LVM labels are around)?
- Should I try to reconstruct with pvcreate/vgcreate? (I have shied away from any *create commands so as not to make things worse.)
- If all is lost, what did I do wrong, and what would I need to back up for next time? (A sketch of what I mean by that is below.)
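By "back up for next time" I mean roughly the following; the /etc/lvm paths are the stock defaults and the destination path is a placeholder:

  # LVM writes a text backup of the VG metadata here on every change
  vgcfgbackup cmain                      # refreshes /etc/lvm/backup/cmain
  cp -a /etc/lvm/backup /etc/lvm/archive /mnt/usbstick/
  # record the array layout (UUID, level, chunk size) somewhere off the box
  mdadm --detail --scan                  # one line per array, suitable for mdadm.conf
  mdadm --examine /dev/sd?1 > /mnt/usbstick/md-superblocks.txt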

Any ideas on how I could get the data back would be greatly appreciated. I am in way over my head, so if somebody knowledgeable tells me "you lost, move on", that would be bad, but at least it would save me some time...

Thanks,
Andreas


_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
