Seem to be in trouble


 



So I have been converting my SuSE 9.1 fileserver to use LVM2 (kernel 2.6.4).
It seems I have run into trouble, and I'm not sure how to get out of it.

All of my physical volumes are RAID-1 mirrors (/dev/md0, /dev/md1).

I had just converted /dev/md1 to a PV and was moving all of the PEs from /dev/md0 to /dev/md1.
Just about the point the move finished, the machine hung.
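For context, the conversion described above would typically look something like the following. This is a sketch, not a transcript of my exact session; the volume group name "system" is inferred from the /dev/system/pvmove0 device mentioned below and may differ on other setups.

```shell
# Initialize the second RAID-1 mirror as an LVM physical volume
pvcreate /dev/md1

# Add the new PV to the existing volume group ("system" is assumed here)
vgextend system /dev/md1

# Migrate all physical extents off the old PV onto the new one
pvmove /dev/md0 /dev/md1
```

It was during the final stage of that pvmove that the hang occurred.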


I could log in, but almost anything that touched the filesystem would cause the process to hang.
After about 10 minutes I rebooted the machine. I then got a whole bunch of errors, and fsck was run on all the filesystems.
Then I got an oops, so I rebooted without those disks and upgraded the kernel to 2.6.10.


Things seem to be mostly OK now, except that one of my logical volumes is messed up.
I have a new logical volume called /dev/system/pvmove0, which is huge.
I don't want to delete it in case I can use it to recover.

When I run pvmove, it detects the move in progress but reports problems:
# pvmove
Number of segments in active LV pvmove0 does not match metadata
Number of segments in active LV pvmove0 does not match metadata
ABORTING: Mirror percentage check failed.


Any ideas on what to do now?
Is there any way to recover?

Nicholas

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
