Re: Can't work normally after attaching disk volumes originally in a VG on another machine


 



On 26 March 2018 at 08:04, Gang He <ghe@suse.com> wrote:
>> Gang He wrote on 23-03-2018 9:30:
>>
>>> 6) attach disk2 to VM2(tb0307-nd2), the vg on VM2 looks abnormal.
>>> tb0307-nd2:~ # pvs
>>>   WARNING: Device for PV JJOL4H-kc0j-jyTD-LDwl-71FZ-dHKM-YoFtNV not
>>> found or rejected by a filter.
>>>   PV         VG  Fmt  Attr PSize  PFree
>>>   /dev/vdc   vg2 lvm2 a--  20.00g 20.00g
>>>   /dev/vdd   vg1 lvm2 a--  20.00g 20.00g
>>>   [unknown]  vg1 lvm2 a-m  20.00g 20.00g
>>
>> This is normal because /dev/vdd contains metadata for vg1, which includes
>> the now-missing disk /dev/vdc ... as the PV is no longer the same.
>
> It looks like each PV includes a copy of the VG metadata, but if some PV has changed (e.g. been removed, or moved to another VG),
> the remaining PVs should have a way to check metadata integrity on each startup (activation?), to avoid this kind of inconsistency automatically.
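
(As an aside, you can already check the on-disk metadata by hand rather than waiting for activation to complain. A rough sketch, assuming the VG and device names from your pvs output above:

vgck vg1
pvck /dev/vdd
vgcfgbackup -f /tmp/vg1.backup vg1

vgck and pvck report metadata inconsistencies, and vgcfgbackup dumps the metadata copy that /dev/vdd carries so you can inspect it.)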

Your workflow is strange. What are you trying to accomplish here?

Your steps in step 5 should be:

vgreduce vg1 /dev/vdc /dev/vdd
pvremove /dev/vdc /dev/vdd

That way you ensure there's no leftover metadata on the PVs (especially
if you need to attach those disks to a different system).
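
If the disks are already attached to VM2 with the stale vg1 reference, something along these lines should clean it up (a rough sketch based on the pvs output above, assuming nothing in vg1 is still needed):

vgreduce --removemissing vg1
vgremove vg1
pvremove /dev/vdd

--removemissing drops the reference to the PV that is no longer present, and vgremove/pvremove then clear the remaining metadata from /dev/vdd.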

Again, a use case to understand your workflow would be beneficial...

Cheers

Fran



