Re: [PATCH] Changes to support Veritas HyperScale (VxHS) block device protocol with qemu-kvm

On Wed, Jan 04, 2017 at 04:26:58PM -0800, ashish mittal wrote:
> On Wed, Jan 4, 2017 at 7:00 AM, John Ferlan <jferlan@xxxxxxxxxx> wrote:
> >
> > [...]
> >
> >>>> We don't anticipate a need for this.
> >>>>
> >>>>>  4. There's no VxHS Storage Pool support in this patch (OK, actually it
> >>>>> would take an additional set of patches to support). That would be
> >>>>> expected, especially for a networked storage environment. You'll note
> >>>>> there are src/storage/storage_backend_{iscsi|gluster|rbd}.{c|h} files
> >>>>> that manage iSCSI, Gluster, and RBD protocol-specific things. For
> >>>>> starters: create, delete, and refresh - especially things that a stat()
> >>>>> wouldn't necessarily return (capacity, allocation, type a/k/a
> >>>>> target.format). Perhaps even the ability to upload/download and wipe
> >>>>> volumes in the pool. Having a pool is a bit more work, but look at the
> >>>>> genesis of the existing storage_backend_*.c files to get a sense of
> >>>>> what needs to change.
> >>>>>
> >>>>
> >>>> VxHS does not need the Storage Pool functionality. Do we still need to
> >>>> implement this?
> >>>>
> >>>
> >>> It's something that's expected.  See my reasoning above.
> >>>
> >>
> >> Some explanation is in order -
> >>
> >> HyperScale is not designed to be used as stand-alone, independent
> >> storage. It is designed only to be used in the OpenStack environment
> >> with all the related Cinder/Nova changes in place. Therefore, we do
> >> not have a need for most of the above-mentioned functions from
> >> libvirt/qemu.
> >>
> >> Even in the OpenStack environment, we do not support creating storage
> >> volumes independently of a guest VM. A VxHS storage volume can only be
> >> created for a particular guest VM. With this scheme, a user does not
> >> have to manage storage pools separately. VxHS automatically configures
> >> and consumes the direct-attached SSDs/HDDs on the OpenStack compute
> >> node when enabled. After that, all requests to add storage to guest
> >> VMs are forwarded by OpenStack directly to the HyperScale daemon on
> >> the compute node, which takes care of creating the underlying storage.
> >
> > And how does that volume creation occur? It would seem that there's some
> > command that does that. That's "independent" of libvirt, so in order for
> > a guest to use that storage it'd need to be created first anyway. I then
> > assume that as part of guest VM destruction you must somehow destroy the
> > volume too. Synchronizing creation and ensuring deletion would seem to
> > be fairly important tasks, and something you'd want to have more tightly
> > integrated with the environment.
> >
> 
> We have hooks in Nova that trap volume creation/deletion requests. We
> then send messages to our service running on every compute node to carry
> out the necessary steps to create/delete the volume.
> 
> > So what about hotplug? Can someone add in VxHS storage to a guest after
> > it's started?
> >
> 
> Yes, we do support hot-plugging. We use the OpenStack Nova framework to
> generate a config for the new volume and attach it to the running guest
> via attach_device.
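
For illustration only, a hot-plugged VxHS disk handed to virsh attach-device
(or virDomainAttachDevice()) might look roughly like the sketch below. It
reuses the name/host values from the example quoted further down in this
mail; the exact element layout depends on how the final patch lands.

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw' cache='none'/>
    <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'>
      <host name='192.168.0.1' port='9999'/>
    </source>
    <target dev='vdb' bus='virtio'/>
  </disk>
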
> 
> > And migration?  Can the guest be migrated? I haven't crawled through
> > that code recently, but I know there'd be changes needed to either allow
> > or disallow it based on the storage type.
> >
> 
> We do support storage migration. We simulate shared storage on top of
> direct-attached storage, so libvirt/qemu can assume shared storage for
> the purposes of migration.
> 
> >>
> >> The role of libvirt is limited to opening the volume specified in the
> >> guest XML file. Volume creation, deletion, etc. are done by the VxHS
> >> daemon in response to messages from the OpenStack controller. Our
> >> OpenStack orchestration code (Nova) is responsible for updating the
> >> guest XML with the correct volume information for libvirt to use. A
> >> regular user (libvirt) is not expected to know what volume IDs exist
> >> on any given host. A regular user also does not have a volume device
> >> node on the local compute node to query. The only way to get to a
> >> volume is over the network, using the server's IP and port.
> >>
> >>
> >
> > For me, having a storage pool is a more complete solution. You don't
> > have to support all the storage backend volume functions (build, create,
> > upload, download, wipe, resize, etc.), but knowing what volumes exist
> > and can be used for a domain is nice.  It's also nice to know how much
> > storage exists (allocation, capacity, physical). It also allows a single
> > point of authentication - the pool authenticates at startup (see the
> > iSCSI and RBD code) and then the domains can use storage from the pool.
> >
> > From a guest viewpoint, rather than having to provide:
> >
> >   <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'>
> >     <host name='192.168.0.1' port='9999'/>
> >   </source>
> >   <auth username='user'>
> >     <secret type='vxhs' usage='somestring'/>
> >   </auth>
> >
> > you'd have:
> >
> >   <source pool='pool-name' volume='vol-name'/>
> >
> > The "host" and "auth" would be part of the <pool> definition. Having to
> > discover the available 'names' ends up being the difference it seems for
> > vxhs since your model would seem to be create storage, create/start
> > guest, destroy guest, destroy storage. I think there's value in being
> > able to use the libvirt storage API's especially w/r/t integrating the
> > volume and guest management.
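
To make that concrete (purely a sketch: no 'vxhs' pool type exists in
libvirt today, and the auth type shown is equally hypothetical), a pool
definition modeled on the existing iSCSI/RBD pools might look something
like:

  <pool type='vxhs'>                        <!-- hypothetical pool type -->
    <name>pool-name</name>
    <source>
      <host name='192.168.0.1' port='9999'/>
      <auth type='vxhs' username='user'>    <!-- hypothetical auth type -->
        <secret usage='somestring'/>
      </auth>
    </source>
  </pool>

A guest disk would then simply reference <source pool='pool-name'
volume='vol-name'/> as above, with the host and auth details kept in one
place.
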
> >
> 
> Would it be OK to consider adding storage pool functionality to vxhs
> after Nova starts using it?

From the libvirt POV there's no requirement to have storage pool support
merged immediately. It is fine to just have the QEMU integration done in
a first patch and the storage pool work as a follow-on patch.

Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://entangle-photo.org       -o-    http://search.cpan.org/~danberr/ :|



