Hi,
On 04/06/2010 06:29 PM, David Lehman wrote:
On Tue, 2010-04-06 at 10:58 -0400, Peter Jones wrote:
On 04/06/2010 06:06 AM, Hans de Goede wrote:
Hi,
On 04/06/2010 11:54 AM, Ales Kozumplik wrote:
On 03/30/2010 02:30 PM, Hans de Goede wrote:
Hi All,
While doing what should have been a simple test of an iscsi-related
patch, I encountered the following issue:
Take a system with a single disk, sda, which has /boot on sda1 and
a PV on sda2. This PV is the only PV of the VG VolGroup, which
contains the LVs lv_swap, lv_root and lv_home.
"Attach" an iscsi disk to this system, which becomes sdb, with
/boot on sdb1 and a PV on sdb2. This PV is the only PV of another
VG that is also named VolGroup, and which contains the LVs lv_swap
and lv_root.
Notice that:
1) the two VGs have the same name, and
2) only sda has an lv_home LV.
Now, in the filter UI, select only disk sdb to install to; then,
depending on scanning order, the following may happen:
Assume sdb gets scanned first by devicetree.py:
- when scanning sdb2, handleUdevLVMPVFormat() will
call "lvm lvchange -ay" for all LVs in this VG
(as seen by udev; more on that later).
- at this point sda has not been scanned, so
isIgnored() has not yet been called for sda2, and thus
lvm_cc_addFilterRejectRegexp("sda2") has not been
called either.
- thus lvm lvchange sees both sda2 and sdb2; it complains
that there are two identically named VGs and picks the one
using the sda2 PV (see the sketch below).
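
To make the ordering concrete, here is a tiny self-contained model of
the scan loop (names and layout are illustrative, not the actual
devicetree.py code):

    # Simplified, hypothetical model of the scan loop -- the point is
    # only the ordering: the reject regexp for sda2 is registered when
    # sda2 itself is scanned, which can happen *after* the VG on sdb2
    # has already been activated.
    scan_order  = ["sdb", "sdb1", "sdb2", "sda", "sda1", "sda2"]
    selected    = {"sdb", "sdb1", "sdb2"}   # filter UI selection
    pvs         = {"sda2", "sdb2"}          # both back a VG named "VolGroup"
    reject_list = []                        # what lvm_cc_addFilterRejectRegexp builds

    for name in scan_order:
        if name not in selected:            # isIgnored()
            reject_list.append(name)        # lvm_cc_addFilterRejectRegexp(name)
            continue
        if name in pvs:
            # handleUdevLVMPVFormat() runs "lvm lvchange -ay" here, with
            # the reject list as it stands *now*:
            print("activating VolGroup, rejects so far:", reject_list)
            # -> "activating VolGroup, rejects so far: []"
            # sda2 is not rejected yet, so lvm sees two VGs named VolGroup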
Maybe we should stop the installation at this point and tell the user
that there are two VGs with the same name, and that this needs to be
addressed before proceeding with the installation? Otherwise we will
need to make too many changes for a rare corner case, and we still
won't be completely happy with them.
That won't work: there actually are no duplicate VGs when looking only
at the devices the user selected in the filter UI. The problem is that
lvm at this point does not honor what we've selected in the filter UI,
which is caused by the way we build the "ignore these devices"
command-line argument for lvm.
Perhaps we should be generating an lvm.conf with a proper filter section for
this instead? It's not really an ideal solution :/
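
For illustration, generating and pointing lvm at such a config could
look roughly like this (a sketch; the path and regexps are made up and
would really be derived from the filter UI selection):

    import os

    # Sketch: write an lvm.conf whose filter accepts only the selected
    # disk and rejects everything else.
    conf = '''devices {
        # accept anything on sdb, reject all other block devices
        filter = [ "a|^/dev/sdb.*|", "r|.*|" ]
    }
    '''
    os.makedirs("/tmp/lvm", exist_ok=True)
    with open("/tmp/lvm/lvm.conf", "w") as f:
        f.write(conf)
    # the lvm tools look for lvm.conf in $LVM_SYSTEM_DIR:
    os.environ["LVM_SYSTEM_DIR"] = "/tmp/lvm"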
It might be worth passing lvm a full list of the devices it is allowed
to look at instead of telling it which devices to ignore.
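
Since the lvm tools also take a --config override on the command line,
we could even skip the file and build the accept list directly; a
sketch (the helper name is made up):

    # Sketch: build an accept-list from the devices lvm *is* allowed to
    # see, instead of accumulating reject regexps as devices get scanned.
    def lvm_filter_args(allowed_devices):
        accepts = "".join('"a|^/dev/%s$|", ' % dev for dev in allowed_devices)
        return ["--config", 'devices { filter = [ %s"r|.*|" ] }' % accepts]

    # lvm_filter_args(["sdb1", "sdb2"]) ->
    #   ['--config',
    #    'devices { filter = [ "a|^/dev/sdb1$|", "a|^/dev/sdb2$|", "r|.*|" ] }']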
I've been thinking in that direction too, and I like it, but ...
We will know
the full list of PVs at activation time.
Do we? Currently we activate LVs from handleUdevLVMPVFormat()
when we find the first PV.
But if you have ideas to change this, I'm all ears. We could delay
bringing up the LVs until the VG actually has pv_count parents (see
the sketch below), but this still won't save us from duplicate VG
name issues.
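
Roughly along these lines, as a toy model (names are illustrative, not
the exact devicetree API; "activating" stands in for "lvm lvchange -ay"):

    # Sketch of delayed activation: only bring up LVs once every PV the
    # VG metadata says exists has actually been scanned.
    class VG(object):
        def __init__(self, name, pv_count):
            self.name, self.pv_count, self.parents = name, pv_count, []

    def handle_pv(pv_name, vg):
        vg.parents.append(pv_name)
        if len(vg.parents) == vg.pv_count:
            # all lower-level devices are known, and the reject list
            # for ignored disks is complete by now
            print("activating %s (PVs: %s)" % (vg.name, vg.parents))

    vg = VG("VolGroup", pv_count=2)
    handle_pv("sdb2", vg)   # not yet -- waits for the second PV
    handle_pv("sdc2", vg)   # now activates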
Hmm, but if we were to use the VG's UUID as the name in the device
tree, then this could work. It would probably require quite a bit of
reworking of the code though, as I think there are assumptions
that devicetree name == VG name in quite a few places.
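
As a toy model of what keying by UUID could look like (names and UUIDs
are made up):

    # Sketch: key the device tree by VG UUID rather than VG name, so two
    # VGs that are both called "VolGroup" no longer collide.
    devicetree_vgs = {}   # uuid -> device, instead of name -> device

    def add_vg(name, uuid):
        if uuid not in devicetree_vgs:
            devicetree_vgs[uuid] = {"name": name, "uuid": uuid, "parents": []}
        return devicetree_vgs[uuid]

    add_vg("VolGroup", "hP3bZ2-...")   # sda2's VG
    add_vg("VolGroup", "Qr7xN9-...")   # sdb2's VG -- a separate entry now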
Note that this problem is less obscure than it seems: it can be
triggered by this PITA called software RAID. If we have a PV on a
software RAID mirror, then lvm will see the PV 3 times and semi-randomly
(or so it seems) pick one (*). So we really must make sure we've scanned
all possible lower-level devices before activating LVs. I've been
thinking about this, and a patch for it should not be all that invasive.
*) Although we could simply not care in this case: at the end of
populating the tree we tear everything down, and on the next activation
the ignore list will be OK and the right PV will get used.
Regards,
Hans