How to trash a broken VG

OK chaps, I've broken it.

I have a VG containing one LV, built from 3 live disks and 2 failed disks.

Whilst the disks were failing I attempted to pvmove the data off them,
which also failed, so I now have a pvmove0 LV that won't go away either.
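
(For context: pvmove0 is the temporary mirror LV that pvmove creates to
shuffle extents around, and it is normally deleted when the move
completes; after an interrupted move it lingers as a hidden internal LV.
Assuming a reasonably recent LVM2, the wreckage can be inspected with:)

# Show all LVs, including hidden internal ones such as pvmove0,
# and the devices each one sits on
lvs -a -o +devices vg_backup

# Show PV status; the two failed disks should show up as "unknown device"
pvs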


So if I attempt to remove even a live disk, I get an error:

[root@nas ~]# vgreduce -v vg_backup /dev/sdi1
    Using physical volume(s) on command line.
    Wiping cache of LVM-capable devices
    Wiping internal VG cache
  Couldn't find device with uuid eFMoUW-6Ml5-fTyn-E2cT-sFXu-kYla-MmiAUV.
  Couldn't find device with uuid LxXhsb-Mgag-ESXZ-fPP6-52dE-iNp2-uKdLwu.
    There are 2 physical volumes missing.
  Cannot change VG vg_backup while PVs are missing.
  Consider vgreduce --removemissing.
    There are 2 physical volumes missing.
  Cannot process volume group vg_backup
  Failed to find physical volume "/dev/sdi1".

Then if I attempt a vgreduce --removemissing, I get:

[root@nas ~]# vgreduce --removemissing vg_backup
  Couldn't find device with uuid eFMoUW-6Ml5-fTyn-E2cT-sFXu-kYla-MmiAUV.
  Couldn't find device with uuid LxXhsb-Mgag-ESXZ-fPP6-52dE-iNp2-uKdLwu.
  WARNING: Partial LV lv_backup needs to be repaired or removed.
  WARNING: Partial LV pvmove0 needs to be repaired or removed.
  There are still partial LVs in VG vg_backup.
  To remove them unconditionally use: vgreduce --removemissing --force.
  Proceeding to remove empty missing PVs.

So I try --force:

[root@nas ~]# vgreduce --removemissing --force vg_backup
  Couldn't find device with uuid eFMoUW-6Ml5-fTyn-E2cT-sFXu-kYla-MmiAUV.
  Couldn't find device with uuid LxXhsb-Mgag-ESXZ-fPP6-52dE-iNp2-uKdLwu.
  Removing partial LV lv_backup.
  Can't remove locked LV lv_backup.

So no go.
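
(The "locked LV" error is the key symptom here: the VG metadata still
records the pvmove as in progress, and LVM refuses to touch lv_backup or
pvmove0 while that lock is held. The usual escape appears to be aborting
the stuck move rather than removing anything. A sketch, assuming the
surviving PVs hold usable metadata; partial activation may be needed
first because of the missing PVs:)

# If the VG won't activate because of the missing PVs, activate it
# partially first (anything on the lost disks is gone regardless)
vgchange -ay --partial vg_backup

# Abort the interrupted move; this discards the temporary pvmove0
# mirror and releases the lock on lv_backup
pvmove --abort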

If I try lvremove pvmove0, I get:

[root@nas ~]# lvremove -v pvmove0
    Using logical volume(s) on command line.
    VG name on command line not found in list of VGs: pvmove0
    Wiping cache of LVM-capable devices
  Volume group "pvmove0" not found
  Cannot process volume group pvmove0
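
(As an aside, that last failure is a syntax problem rather than another
metadata one: lvremove expects a VG/LV path, so a bare pvmove0 is parsed
as a volume group name. The correct spelling would be something like the
line below, though it will presumably still be refused while the move
lock is in place:)

# lvremove takes vg/lv, not a bare LV name; expect this to be refused
# anyway while lv_backup is locked by the stuck pvmove
lvremove vg_backup/pvmove0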

So heeelp, I seem to be caught in some kind of loop.
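
(If pvmove --abort can't untangle things with the disks gone, the
remaining route is usually to roll the VG metadata back to a copy taken
before the pvmove, from LVM's automatic archive, and then drop the
missing PVs. A rough sketch; the archive filename below is a
placeholder, so pick the newest copy that predates the pvmove:)

# List the archived metadata versions LVM saved before each change
vgcfgrestore --list vg_backup

# Restore a copy taken before the pvmove started (placeholder name)
vgcfgrestore -f /etc/lvm/archive/vg_backup_00042-1234567890.vg vg_backup

# With the pvmove gone from the metadata, this should now succeed
vgreduce --removemissing --force vg_backup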



