Re: LVM RAID repair trouble

On 26 September 2016 at 08:54, Giuliano Procida
<giuliano.procida@gmail.com> wrote:
> The net effect is that I now have RAID volumes in an odd state with
> extracted / missing images (I think just a cosmetic issue) and which
> rebuild on every boot (or rather re-activation and use).
> The volumes take a long time to resync, especially concurrently!

I'm mostly interested in fixing my system, but I had a look at the
stderr from the last repair attempt. If I had to guess, I'd say that
lvconvert is racing against state changes in the kernel and
eventually asks dm-raid to set up an array with mismatched
expectations about whether device 0 should be flagged for rebuild or
not.
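
For what it's worth, this is roughly how I've been checking what
dm-raid is actually handed at activation time (the mapped device
name below is just an example):

  # The raid table line lists the target parameters; a "rebuild <N>"
  # entry means dm-raid will resync leg N from scratch.
  dmsetup table vg0-lvol0

  # Per-device health characters: 'A' = alive and in sync,
  # 'a' = alive but not yet in sync, 'D' = dead/failed,
  # followed by the resync progress.
  dmsetup status vg0-lvol0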

Looking through the LVM code, there is clearly an effort to clear
rebuild flags precisely to prevent the problem I'm having.
Unfortunately, with lvconvert exiting early, those code paths are
presumably never taken. I also noticed that the rebuild flag is
singled out for exclusion when the (backup?) metadata is generated.

I've extracted the metadata from the PVs using another small utility
I wrote. The format is not quite the same as the /etc/lvm one; in
particular, REBUILD status flags are NOT filtered out there.
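
(For a quick look without any special tool, something as crude as
this also shows the raw metadata text, REBUILD flags and all; the
first metadata area normally sits within the first MiB of the PV,
and the device path is just an example.)

  # Crude dump of the on-disk metadata text area (first MiB of the PV).
  # The live metadata is the most recent complete "vg0 { ... }" blob.
  dd if=/dev/sdf1 bs=1M count=1 2>/dev/null | strings | less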

It turns out that I do have the REBUILD flag on the rmeta_5 and rimage_5 subLVs!

Questions:
How can I clear REBUILD safely? (A rough sketch of what I'm
considering follows below.)
Can I renumber _5 to _0 at the same time?
Does it make sense to use the latest version of the LVM userspace
tools for the final repair conversions?
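
For the first question, this is the sort of thing I had in mind,
though I haven't dared run it yet (the VG name and file path are
examples, and it assumes the affected LVs are deactivated first):

  # Take a fresh text copy of the current VG metadata.
  vgcfgbackup -f /tmp/vg0-meta.txt vg0

  # Hand-edit /tmp/vg0-meta.txt, removing "REBUILD" from the status
  # lists of the affected rmeta_5/rimage_5 subLVs (and nothing else).

  # Write the edited metadata back to the PVs.
  vgcfgrestore -f /tmp/vg0-meta.txt vg0

Whether the _5 -> _0 renumbering can safely go into the same edit is
exactly the part I'm unsure about.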

--gp



