RE: removing bad PVs

The solution I ended up with was to add a new disk and create a replacement PV
on it with the same UUID as the missing one. That wasn't the solution I was
going for, but it ended up working out. I wasn't able to get vgreduce
--removemissing to work for the scenario I mentioned before.
The test scenario I was going for was an unexpected removal of a disk, where
I could remove the disk from the VG and bring the VG back up in a
semi-working state, minus one disk and some of the data.
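For the archives, a sketch of the "recreate the PV with the old UUID" approach. The VG name (myvg), device (/dev/sdb1), and UUID are placeholders; the real UUID has to come from your own metadata backup under /etc/lvm/backup. These commands need root and will write LVM labels to the device, so treat this as an outline, not a paste-and-run recipe:

```shell
# All names below (myvg, /dev/sdb1, the UUID) are hypothetical examples.

# 1. Look up the UUID of the missing PV in the latest metadata backup:
grep -B2 -A4 'id = ' /etc/lvm/backup/myvg

# 2. Recreate a PV on the replacement disk, reusing the old UUID and
#    the backed-up metadata:
pvcreate --uuid "56ogEk-...-...-...-...-...-......" \
         --restorefile /etc/lvm/backup/myvg /dev/sdb1

# 3. Restore the VG metadata onto the new PV and reactivate the VG:
vgcfgrestore myvg
vgchange -ay myvg
```

The data that lived on the old disk is gone, of course; this just makes the VG consistent again so the surviving LVs can be activated.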

-- Matt

+--
|Matthew Plante
| University of New Hampshire
| InterOperability Lab
| Research & Development
| SMTP: maplante@iol.unh.edu
| Phone: +1-603-862-0203
+-


-----Original Message-----
From: linux-lvm-bounces@redhat.com
[mailto:linux-lvm-bounces@redhat.com]On Behalf Of Marcin Struzak
Sent: Thursday, February 09, 2006 6:26 PM
To: maplante@iol.unh.edu; LVM general discussion and development
Subject: Re:  removing bad PVs


Matthew Plante wrote:
> actually, nevermind, I figured it out :-)

Could you share with the list what you did?
Thanks!

--Marcin

_______________________________________________
linux-lvm mailing list
linux-lvm@redhat.com
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
