Re: [PATCH 0/5] Add vhost-blk support

On Tue, Jul 17, 2012 at 12:56:31PM +0200, Paolo Bonzini wrote:
> On 17/07/2012 12:49, Michael S. Tsirkin wrote:
> >> Ok, that would make more sense.  One difference between vhost-blk and
> >> vhost-net is that for vhost-blk there are also management actions that
> >> would trigger the switch, for example a live snapshot.
> >> So a prerequisite for vhost-blk would be that it is possible to disable
> >> it on the fly while the VM is running, as soon as all in-flight I/O is
> >> completed.
> > 
> > It applies to vhost-net too. For example, if you bring the link down,
> > we switch to userspace. So vhost-net supports this switch on the fly.
> 
> Cool.
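
As a side note on what that on-the-fly switch involves: roughly, stop
feeding the kernel backend, wait for in-flight I/O to drain, then let the
userspace code take over the ring.  A minimal sketch, with made-up structure
and field names, not actual vhost/QEMU code:

#include <stdatomic.h>

struct blk_backend {
        atomic_int inflight;    /* requests currently owned by the kernel side */
        atomic_bool use_kernel; /* true while the vhost backend handles the ring */
};

static void switch_to_userspace(struct blk_backend *be)
{
        /* 1. Stop handing new requests to the kernel backend. */
        atomic_store(&be->use_kernel, false);

        /* 2. Wait until all in-flight kernel-side I/O has completed. */
        while (atomic_load(&be->inflight) > 0)
                ;       /* a real implementation would sleep here, not spin */

        /* 3. From here on, the userspace virtio-blk code processes the ring. */
}

The important property is step 2: the backends only ever swap on an idle
ring, which is what keeps the switch invisible to the guest.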
> 
> >> (Note that, however, this is not possible for vhost-scsi, because it
> >> really exposes different hardware to the guest.  It must not happen that
> >> a kernel upgrade or downgrade toggles between userspace SCSI and
> >> vhost-scsi, for example).
> > 
> > I would say this is not a prerequisite for merging in qemu.
> > It might be a required feature for production but it
> > is also solvable at the management level.
> 
> I'm thinking of level-triggered interrupts here.  You cannot make a change
> in the guest and have it cause completely unrelated changes to the hardware
> that the guest sees.

Absolutely.

So the right thing for vhost-scsi might be to just support level interrupts
(the equivalent of the "force" flag in vhost-net).
We don't do that unconditionally in vhost-net because level is actually
triggered by some old guests, but all virtio-scsi guests use MSI, so for
vhost-scsi level is just a spec-compatibility issue.

We might also gain kernel support for level at some point.
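
Roughly, the decision I have in mind looks like this (a sketch only, with
made-up names, not the actual QEMU vhost code):

#include <stdbool.h>

static bool use_kernel_backend(bool guest_uses_msix, bool force,
                               bool backend_supports_level)
{
        if (guest_uses_msix)
                return true;    /* MSI guests can always take the in-kernel path */
        if (backend_supports_level || force)
                return true;    /* level-only guests need explicit support or "force" */
        return false;           /* otherwise fall back to userspace emulation */
}

For vhost-scsi, supporting level would mean backend_supports_level is always
true, so the userspace fallback never triggers and the guest-visible
hardware never changes.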

> >>>> having to
> >>>> support the API; having to handle the transition away from one more thing
> >>>> when something better comes out.
> >>>
> >>> Well, this is true for any code. If the limited feature set which
> >>> vhost-blk can accelerate is something many people use, then accelerating
> >>> it by 5-15% might outweigh the support costs.
> >>
> >> It is definitely what people use if they are interested in performance.
> > 
> > In that case it seems to me we should stop using the feature set as
> > an argument and focus on whether the extra code is worth the 5-15% gain.
> > No one seems to have commented on that, so everyone on the list thinks
> > that aspect is OK?
> 
> I would like to see a breakdown of _where_ the 5-15% lies, something
> like http://www.linux-kvm.org/page/Virtio/Block/Latency.

Yes, but I think that's also just nice to have. It's hard to argue, IMO,
against the point that virtio as a kernel interface cuts out some overhead.
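
For the breakdown itself, the kind of guest-side probe that could feed it is
something like the following (a rough sketch; the test device /dev/vdb, the
direct 4K reads and the request count are assumptions for illustration):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
#include <unistd.h>

/* Average completion time of N direct 4K reads from a test device;
 * run it against the userspace virtio-blk backend and against vhost-blk
 * to see where the delta comes from. */
int main(void)
{
        enum { N = 10000, BS = 4096 };
        void *buf;
        int fd = open("/dev/vdb", O_RDONLY | O_DIRECT);

        if (fd < 0 || posix_memalign(&buf, BS, BS) != 0) {
                fprintf(stderr, "setup failed\n");
                return 1;
        }

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++) {
                if (pread(fd, buf, BS, (off_t)i * BS) != BS) {
                        perror("pread");
                        return 1;
                }
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double us = (t1.tv_sec - t0.tv_sec) * 1e6 +
                    (t1.tv_nsec - t0.tv_nsec) / 1e3;
        printf("avg 4K direct read latency: %.1f us over %d reads\n",
               us / N, N);

        free(buf);
        close(fd);
        return 0;
}

Comparing the guest number with the host-side number for the same backing
storage would show how much of the 5-15% sits in the notification and
completion path rather than in the storage itself.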

> > The kernel merge window is coming up, and I would like to see whether
> > either of vhost-blk / vhost-scsi is actually going to be used by userspace.
> > I guess we could tag it for staging, but it would be nice to avoid that.
> 
> Staging would be fine by me for both vhost-blk and vhost-scsi.
> 
> Paolo

The reason I say staging is that there seems to be a deadlock where
userspace waits for the kernel to merge a driver, while the kernel does
not want to commit to an ABI that will then go unused.

So even if it gets tagged as staging, it would only make sense for it to
stay there for one cycle, and then either get removed if no userspace
materializes, or lose the staging tag if it does.

-- 
MST