Re: [PATCH 3/6] Introduce new XMLs to specify disk source using storage pool/vol

On 01/25/2013 01:42, Laine Stump wrote:
> On 01/23/2013 10:24 AM, Osier Yang wrote:
>> On 01/23/2013 22:58, Daniel P. Berrange wrote:
>>> On Wed, Jan 23, 2013 at 10:55:25PM +0800, Osier Yang wrote:
>>>> On 01/23/2013 22:18, Daniel P. Berrange wrote:
>>>>> On Wed, Jan 23, 2013 at 07:04:35PM +0800, Osier Yang wrote:
>>>>>> With this patch, one can specify the disk source using storage
>>>>>> pool/volume like:
>>>>>>
>>>>>>     <disk type='file' device='disk'>
>>>>>>       <driver name='qemu' type='raw' cache='none'/>
>>>>>>       <source type='pool'>
>>>>>>         <volume key='/var/lib/libvirt/images/foo.img'/>
>>>>>>         <seclabel relabel='no'/>
>>>>>>       </source>
>>>>>>       <target dev='vdb' bus='virtio'/>
>>>>>>     </disk>
>>>>>>
>>>>>>     <disk type='file' device='disk'>
>>>>>>       <driver name='qemu' type='raw' cache='none'/>
>>>>>>       <source type='pool'>
>>>>>>         <volume path='/var/lib/libvirt/images/foo.img'/>
>>>>>>         <seclabel relabel='no'/>
>>>>>>       </source>
>>>>>>       <target dev='vdb' bus='virtio'/>
>>>>>>     </disk>
>>>>>>
>>>>>>     <disk type='file' device='disk'>
>>>>>>       <driver name='qemu' type='raw' cache='none'/>
>>>>>>       <source type='pool'>
>>>>>>         <pool uuid|name="$var"/>
>>>>>>         <volume name='foo.img'/>
>>>>>>         <seclabel relabel='no'/>
>>>>>>       </source>
>>>>>>       <target dev='vdb' bus='virtio'/>
>>>>>>     </disk>
>>>>>
>>>>>
>>>>> If you're going to introduce a new schema for <source>,
>>>>> then you must introduce a new disk type value, i.e. a
>>>>> <disk type='file'> must always use the <source file='...'/>
>>>>> XML syntax, otherwise you cause backwards compatibility
>>>>> problems for applications.
>>>>
>>>> Oh, yes. I need a v2.
>>>>
>>>>>
>>>>> What you need here is a <disk type='volume'/> for your
>>>>> new schema.
>>>>>
>>>>
>>>> But before I make up the v2, do you see other design problem
>>>> on the set? Thanks.
>>>
>>> I'm wondering if it is really required to allow so many different
>>> options for specifying the pool & volume. For <interface type='network'>
>>> we were fine simply using the 'name' and ignoring the UUID. I can't help
>>> thinking that for storage we can similarly just use the pool name and
>>> volume name.
>>>
>>
>> This was my hesitation too, halfway through. But to get the RFC
>> posted earlier, and considering it's at least not a bad thing to
>> provide all the interfaces, I went on with it.
>>
>> I think it makes no big difference if we simply use the pool name and
>> volume name, but what I'm not sure about is whether users will want the
>> uuid for the pool, and the path/key for the volume (using path/key is
>> convenient since the pool is then not even necessary).
>
> (Keep in mind this is coming from a non-storage guy, so there may be
> some flaws in my logic :-)

Rather clear actually. :-)

>
> Too many ways of describing the same thing is bad, as it leads to confusion.
>
> Also, since the point of making this abstraction is to isolate the
> host-specific details of the device from the domain's configuration, I
> think allowing the file path to be specified is counter-productive
> (since it could be different from host to host, depending on where an
> NFS share is mounted, for example).

Right, I see several benefits in gluing the domain and storage
configuration together:

1) The disk source (or the source of other devices) may not be stable
after a system reboot, e.g. the path of a LUN behind a vHBA, or the
scsi_host number. We can make these stable via a storage pool, e.g.
with "persistent vHBA support in storage pool".

2) As you mentioned above: the same underlying disk source (for disks
of type 'file', 'block', or 'dir') can have a different path on
different hosts. Referencing a storage pool and volume avoids this.

3) Simpler config for disks of 'network' type (on the storage side
these are pools of RBD, sheepdog, or glusterfs type; we still have to
add glusterfs pool support in the future): the network details can be
taken from the pool/volume configuration, which also avoids
duplicating them in every domain. See the sketch just below.
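
For illustration (the host and image names here are made up), this is
roughly what an RBD disk of 'network' type looks like today, with the
monitor hosts repeated in every domain that uses the image; with a
pool/volume reference those details would live only in the RBD pool
definition:

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='rbd' name='rbdpool/foo'>
        <host name='mon1.example.org' port='6789'/>
        <host name='mon2.example.org' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
    </disk>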

There may be more benefits I have not realized yet, and thus more
work to do.

>
> Of course this is a bit of a different situation than network devices,
> since the pool/volume must end up pointing to the same bits from all
> hosts (either the exact same bits via a different access path, or a new
> copy of the bits migrated over to a different type of storage), but in
> the end it should be possible for the disk image to be in a local
> directory on one host, accessed by NFS on another, and maybe even via
> iscsi or lvm on another - those details should all be in the pool/volume
> definitions on each host, and the guest config should just say "this
> disk's image is in pool x, volume y".

So you agree with just using the "pool name and volume name", then?
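
If so, a minimal sketch of what that could look like, combining the
<disk type='volume'> Daniel suggested with name-only references (the
exact element/attribute layout is of course still open, and the pool
and volume names here are made up):

    <disk type='volume' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source pool='mypool' volume='foo.img'/>
      <target dev='vdb' bus='virtio'/>
    </disk>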


--
libvir-list mailing list
libvir-list@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/libvir-list


