PV segments corrupted in vg1: LVM corrupted

After removing some logical volumes (backup, etc.) and extending the
home logical volume, data in the whole volume group (vg1) has become
corrupted: the root logical volume is mounted at boot time, but I am
unable to mount any of the other logical volumes in that volume group
(var, home, etc.) after a reboot.
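
The changes were along these lines (from memory; exact sizes and
filesystem details are approximate):

  # remove the no-longer-needed backup LV
  lvremove /dev/vg1/backup

  # grow home into the freed space and resize its filesystem
  lvextend -L +20G /dev/vg1/home
  resize2fs /dev/vg1/home    # assuming ext2/ext3 on home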

pvscan displays:

  PV segment VG free_count mismatch: 0 != 4294966485
  Internal error: PV segments corrupted in vg1.
  No matching physical volumes found

vgdisplay, vgchange, and vgscan display a similar message.

The system uses 2 IDE drives with 2 md (software RAID) mirror devices
spanning both drives (layout sketched below):
1 md device contains /boot
1 md device contains the LVM physical volume: 1 volume group (vg1),
with logical volumes for root, swap, var, usr, and home
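
The layout was originally created roughly like this (a sketch; device
names and LV sizes here are approximate):

  # two RAID1 (mirror) md devices across the two IDE drives
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdb1   # /boot
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/hda2 /dev/hdb2   # LVM PV

  # one volume group on the second mirror, holding all the system LVs
  pvcreate /dev/md1
  vgcreate vg1 /dev/md1
  lvcreate -L 512M -n root vg1
  lvcreate -L 1G   -n swap vg1
  lvcreate -L 2G   -n var  vg1
  lvcreate -L 4G   -n usr  vg1
  lvcreate -L 20G  -n home vg1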

vgcfgrestore fails.
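
The invocation tried was roughly as follows (exact options may have
differed), restoring from the text backup named below:

  vgcfgrestore -f /etc/lvm/backup/vg1 vg1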

/etc/lvm/backup/vg1 contains a plain-text (English) description of the
last known state of the logical volumes.
/etc/lvmconf/vg1.conf is a binary file (may be out of date).
/etc/lvmtab.d/vg1 appears to be the same binary file as above (may be
out of date).

The system is Debian unstable, running lvm2, udev and device-mapper on
kernel 2.6.8 compiled from the Debian source package. It appeared
stable for 1 week after the LVM changes, but failed on reboot.

1. What command can be used to recreate the vg1.conf binary file from
the plain-text backup? (The modification date of vg1.conf is older
than the changes.)

2. What is the VG free count, and how can it be reset to 0? (The
reported value, 4294966485, looks like an unsigned 32-bit underflow:
4294966485 = 2^32 - 811, i.e. -811.)

3. What other tools are available to fix a broken LVM setup?

Thank you

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
