Daniel P. Berrange wrote:
On Tue, Jul 21, 2009 at 05:25:36PM -0400, David Allan wrote:
The following patch implements multipath pool support. It's very
basic functionality, consisting of creating a pool that contains
all the multipath devices on the host. That will cover the common
case of users who just want to discover all the available multipath
devices and assign them to guests.
It doesn't currently allow configuration of multipathing, so for
now setting the multipath configuration will have to continue to be
done as part of the host system build.
Example XML to create the pool is:
<pool type="mpath"> <name>mpath</name> <target>
<path>/dev/mapper</path> </target> </pool>
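A minimal sketch of driving that through the Python bindings (the pool
name and XML mirror the example above; the qemu:///system URI and the
lack of error handling are just for illustration):

import libvirt

POOL_XML = """
<pool type="mpath">
  <name>mpath</name>
  <target>
    <path>/dev/mapper</path>
  </target>
</pool>
"""

conn = libvirt.open("qemu:///system")

# Define the persistent pool config, then start it so it scans the
# host for multipath devices.
pool = conn.storagePoolDefineXML(POOL_XML, 0)
pool.create(0)

# Each multipath device on the host should show up as a volume.
for name in pool.listVolumes():
    vol = pool.storageVolLookupByName(name)
    print(name, vol.path())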
So this is in essence a 'singleton' pool, since there's only really
one of them per host. There is also no quantity of storage associated
with an mpath pool - it is simply dealing with volumes from other
pools. This falls into the same conceptual bucket as things like
DM-RAID, MD-RAID and even loopback device management.
It is a singleton pool, in that there is only one dm instance per host.
With regard to capacity, the dm devices have capacity, and their
constituent devices could be members of other pools. Can you elaborate
on what you see as the implications of those points?
The question I've never been able to satisfactorily answer myself is
whether these things(mpath,raid,loopback) should be living in the
storage pool APIs, or in the host device APIs.
I also wonder how people would determine the association between the
volumes in the mpath pool and the volumes for each corresponding path,
e.g. how they determine that the /dev/mapper/dm-4 multipath device is
associated with devices from the SCSI storage pool 'xyz'. The storage
volume APIs & XML format don't really have a way to express this
relationship.
It's not difficult to query which devices are parents of a given
device, but what is the use case for finding out the pools of the
parent devices?
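For what it's worth, the parent relationship is already visible in
sysfs, so finding the paths behind a dm node is straightforward; a
rough sketch, using the dm-4 name from above:

import os

def dm_parents(dm_name):
    """Return the block devices (e.g. sda, sdb) backing a
    device-mapper node such as 'dm-4', as exposed by sysfs."""
    return sorted(os.listdir(os.path.join("/sys/block", dm_name, "slaves")))

print(dm_parents("dm-4"))   # e.g. ['sda', 'sdb']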
The host device APIs have a much more limited set of operations
(list, create, delete) but this may well be all that's needed for
things like raid/mpath/loopback devices, and since their XML format
is capability based we could add a multipath capability under
which we list the constituent paths of each device.
If we decide to implement creation and destruction of multipath devices,
I would think the node device APIs would be the place to do it.
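For reference, those APIs can already be driven like this from the
Python bindings; a rough sketch, where the 'storage' capability filter
is just an example and a 'multipath' capability would be a new
addition:

import libvirt

conn = libvirt.open("qemu:///system")

# Enumerate host devices advertising the 'storage' capability and
# dump their XML; a 'multipath' capability could be listed the same
# way if it were added.
for name in conn.listDevices("storage", 0):
    dev = conn.nodeDeviceLookupByName(name)
    print(dev.XMLDesc(0))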
Now, if my understanding is correct, then if multipath is active it
should automatically create multipath devices for each unique LUN on
a storage array. DM does SCSI queries to determine which block
devices are paths to the same underlying LUN.
That's basically correct, and the administrator can configure which
devices have multipath devices created.
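As an illustration of that grouping (not how device-mapper itself does
it, which goes through scsi_id/VPD inquiries), roughly the same result
can be approximated from userspace by bucketing sd devices on their
WWID, assuming a kernel that exposes it in sysfs:

import glob
from collections import defaultdict

def paths_by_lun():
    """Group sd devices by SCSI WWID; devices sharing a WWID are
    different paths to the same underlying LUN, which is roughly the
    grouping dm-multipath arrives at via its own SCSI inquiries."""
    groups = defaultdict(list)
    for wwid_file in glob.glob("/sys/block/sd*/device/wwid"):
        dev = wwid_file.split("/")[3]            # e.g. 'sda'
        with open(wwid_file) as f:
            groups[f.read().strip()].append(dev)
    return groups

for wwid, devs in paths_by_lun().items():
    if len(devs) > 1:
        print(wwid, devs)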
Taking a simple iSCSI storage pool
<pool type='iscsi'>
  <name>virtimages</name>
  <source>
    <host name="iscsi.example.com"/>
    <device path="demo-target"/>
  </source>
  <target>
    <path>/dev/disk/by-path</path>
  </target>
</pool>
this example would show you each individual block device, generating
paths under /dev/disk/by-path.
Now, we decide we want to make use of multipath for this particular
pool. We should be able to just change the target path, to point to
/dev/mpath,
<pool type='iscsi'>
  <name>virtimages</name>
  <source>
    <host name="iscsi.example.com"/>
    <device path="demo-target"/>
  </source>
  <target>
    <path>/dev/mpath</path>
  </target>
</pool>
and have it give us back the unique multipath enabled LUNs, instead
of each individual block device.
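From the consumer's point of view the difference is just the set of
volume paths the pool reports; a sketch via the Python bindings,
reusing the virtimages pool name from the example:

import libvirt

conn = libvirt.open("qemu:///system")
pool = conn.storagePoolLookupByName("virtimages")
pool.refresh(0)

# With a /dev/disk/by-path target this prints one path per block
# device; with the change suggested above it would print one
# multipath device per unique LUN instead.
for name in pool.listVolumes():
    print(pool.storageVolLookupByName(name).path())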
The problem with this approach is that dm devices are not SCSI devices,
so putting them in a SCSI pool seems wrong. iSCSI pools have always
contained volumes which are iSCSI block devices, directory pools have
always had volumes which are files. We shouldn't break that assumption
unless we have a good reason. It's not impossible to do what you
describe, but I don't understand why it's a benefit.
The target element is ignored, as it is by the disk pool, but the
config code rejects the XML if it does not exist. That behavior
should obviously be cleaned up, but I think that should be done in
a separate patch, as it's really a bug in the config code, not
related to the addition of the new pool type.
The target element is not ignored by the disk pool. This is used to
form the stable device paths via virStorageBackendStablePath() for
all block device based pools.
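Roughly speaking, that stable-path lookup amounts to matching the
device node against the entries under the pool's target path; a
simplified sketch of the idea, not the actual C implementation:

import os
import stat

def stable_path(device, target_dir):
    """Return the entry under target_dir (e.g. /dev/disk/by-path)
    that refers to the same block device as 'device', falling back
    to the original path if nothing matches."""
    want = os.stat(device).st_rdev
    for name in sorted(os.listdir(target_dir)):
        candidate = os.path.join(target_dir, name)
        try:
            st = os.stat(candidate)
        except OSError:
            continue
        if stat.S_ISBLK(st.st_mode) and st.st_rdev == want:
            return candidate
    return device

print(stable_path("/dev/sda", "/dev/disk/by-path"))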
Hmm--on my system the path I specify shows up in the pool XML, but is
unused as far as I can tell. I can hand it something totally bogus and
it doesn't complain. I think your next point is very good, though, so
I'll make the target element meaningful in the multipath case and we can
investigate the disk behavior separately.
Even for multipath, there are 3 possible directories under which you
can see LUNs, with varying pluses & minuses for naming
stability/uniqueness across hosts.
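For illustration, a single multipath device can usually be reached
under several of those locations at once; a rough sketch that checks
some common candidates (exactly which directories exist depends on
the device-mapper/multipath-tools setup on the host):

import os

# Which of these exist depends on the device-mapper/multipath-tools
# setup on the host.
CANDIDATE_DIRS = ["/dev/mapper", "/dev/mpath", "/dev/disk/by-id"]

def names_for(dm_node):
    """List every name under the candidate directories that resolves
    to the same device-mapper node, e.g. '/dev/dm-4'."""
    want = os.stat(dm_node).st_rdev
    names = []
    for d in CANDIDATE_DIRS:
        if not os.path.isdir(d):
            continue
        for name in os.listdir(d):
            try:
                if os.stat(os.path.join(d, name)).st_rdev == want:
                    names.append(os.path.join(d, name))
            except OSError:
                continue
    return names

print(names_for("/dev/dm-4"))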
That's a good point. I'll make the target element meaningful.
Dave