On Tue, Oct 23, 2018 at 08:23:06PM -0600, Gang He wrote:
> Teigland <teigland@xxxxxxxxxx> wrote:
> > On Mon, Oct 22, 2018 at 08:19:57PM -0600, Gang He wrote:
> >> Process: 815 ExecStart=/usr/sbin/lvm pvscan --cache --activate ay 9:126 (code=exited, status=5)
> >>
> >> Oct 22 07:34:56 linux-dnetctw lvm[815]: WARNING: Not using device /dev/md126 for PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV.
> >> Oct 22 07:34:56 linux-dnetctw lvm[815]: WARNING: PV qG1QRz-Ivm1-QVwq-uaHV-va9w-wwXh-lIIOhV prefers device /dev/sdb2 because of previous preference.
> >> Oct 22 07:34:56 linux-dnetctw lvm[815]: Cannot activate LVs in VG vghome while PVs appear on duplicate devices.
> >
> > I'd try disabling lvmetad; I've not been testing these with lvmetad on.
>
> You mean I should have the user disable lvmetad?

Yes.

> > We may need to make pvscan read both the start and end of every disk to
> > handle these md 1.0 components, and I'm not sure how to do that yet
> > without penalizing every pvscan.
>
> What can we do for now? It looks like more code is needed to implement this logic.

Excluding component devices in global_filter is always the most direct way of solving problems like this. (I still hope to find a solution that doesn't require that.)

_______________________________________________
linux-lvm mailing list
linux-lvm@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-lvm
read the LVM HOW-TO at http://tldp.org/HOWTO/LVM-HOWTO/
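
For reference, the global_filter workaround mentioned above might look like the following in /etc/lvm/lvm.conf. This is only a sketch using the device names from the log excerpt in this thread (/dev/sdb2 as the md component partition, /dev/md126 as the assembled array); the exact patterns depend on your system:

```
devices {
    # Reject the raw md component partition so LVM only sees the
    # assembled array (/dev/md126) as the PV, avoiding the
    # "PVs appear on duplicate devices" error.
    global_filter = [ "r|^/dev/sdb2$|", "a|.*|" ]
}
```

Note that global_filter (unlike filter) is also applied by lvmetad, so it takes effect for pvscan --cache autoactivation as well. After changing it, rescan (e.g. with pvscan --cache) so stale cached state is dropped.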