Re: [PATCH v3 3/6] vbus: add a "vbus-proxy" bus model for vbus_driver objects

* Anthony Liguori <anthony@xxxxxxxxxxxxx> wrote:

> Ingo Molnar wrote:
>> * Gregory Haskins <ghaskins@xxxxxxxxxx> wrote:
>>
>>   
>>> This will generally be used for hypervisors to publish any host-side
>>> virtual devices up to a guest.  The guest will have the opportunity
>>> to consume any devices present on the vbus-proxy as if they were
>>> platform devices, similar to existing buses like PCI.
>>>
>>> Signed-off-by: Gregory Haskins <ghaskins@xxxxxxxxxx>
>>> ---
>>>
>>>  MAINTAINERS                 |    6 ++
>>>  arch/x86/Kconfig            |    2 +
>>>  drivers/Makefile            |    1 
>>>  drivers/vbus/Kconfig        |   14 ++++
>>>  drivers/vbus/Makefile       |    3 +
>>>  drivers/vbus/bus-proxy.c    |  152 +++++++++++++++++++++++++++++++++++++++++++
>>>  include/linux/vbus_driver.h |   73 +++++++++++++++++++++
>>>  7 files changed, 251 insertions(+), 0 deletions(-)
>>>  create mode 100644 drivers/vbus/Kconfig
>>>  create mode 100644 drivers/vbus/Makefile
>>>  create mode 100644 drivers/vbus/bus-proxy.c
>>>  create mode 100644 include/linux/vbus_driver.h
>>>     
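
(As an aside, a guest-side consumer of this bus would presumably look 
something like the sketch below, patterned on the standard Linux 
driver model. The vbus_* names and signatures are my reading of the 
proposed include/linux/vbus_driver.h and are assumptions, not 
verified against the patch; treat it as an illustration of the model, 
not the actual API.)

/*
 * Illustrative only: a guest-side driver binding to a device the
 * host publishes on the vbus-proxy bus.  All vbus_* names below are
 * assumed, not taken verbatim from the patch.
 */
#include <linux/module.h>
#include <linux/vbus_driver.h>

static int venet_probe(struct vbus_device_proxy *vdev)
{
	/* negotiate shared memory and signaling with the host side here */
	return 0;
}

static int venet_remove(struct vbus_device_proxy *vdev)
{
	return 0;
}

static struct vbus_driver_ops venet_ops = {
	.probe  = venet_probe,
	.remove = venet_remove,
};

static struct vbus_driver venet_driver = {
	.type  = "virtual-ethernet",	/* matched against devices the host publishes */
	.owner = THIS_MODULE,
	.ops   = &venet_ops,
};

static int __init venet_init(void)
{
	return vbus_driver_register(&venet_driver);
}
module_init(venet_init);

MODULE_LICENSE("GPL");
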
>>
>> Is there a consensus on this with the KVM folks? (I've added the KVM  
>> list to the Cc:)
>   
> I'll let Avi comment about it from a KVM perspective but from a 
> QEMU perspective, I don't think we want to support two paravirtual 
> IO frameworks.  I'd like to see them converge.  Since there's an 
> installed base of guests today with virtio drivers, there really 
> ought to be a compelling reason to change the virtio ABI in a 
> non-backwards-compatible way.  This means convergence really ought 
> to mean adding features to virtio.

I agree.

While different paravirt drivers are inevitable for things that are 
externally constrained (say, supporting different hypervisors), doing 
different _Linux-internal_ paravirt drivers looks plain stupid and 
counter-productive. It splits testing and development effort.

So either the vbus code replaces virtio (on technical merits such 
as performance), or virtio is enhanced with the performance 
improvements that vbus brings.

> On paper, I don't think vbus really has any features over virtio.  
> vbus does things in different ways (paravirtual bus vs. PCI for 
> discovery) but I think we're happy with how virtio does things 
> today.
>
> I think the reason vbus gets better performance for networking 
> today is that vbus' backends are in the kernel while virtio's 
> backends are currently in userspace.  Since Michael has a 
> functioning in-kernel backend for virtio-net now, I suspect we're 
> weeks (maybe days) away from performance results.  My expectation 
> is that vhost + virtio-net will be as good as venet + vbus.  If 
> that's the case, then I don't see any reason to adopt vbus unless 
> Greg thinks there are other compelling features over virtio.

Keeping virtio's backend in user-space was rather stupid IMHO. 

Having the _option_ of a user-space backend (for flexibility, 
extensibility, etc.) is OK, but not having in-kernel acceleration is 
bad.
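
To make the kernel-acceleration point concrete: with an in-kernel 
backend, user-space only has to hand the guest's virtqueues and the 
tap fd over to the kernel once, and the fast path never leaves the 
kernel afterwards. Roughly along these lines (a sketch based on the 
vhost ioctl interface Michael has been posting; error handling, the 
vring sizing and the kick/call eventfd wiring are elided, and details 
may differ from the final interface):

/*
 * Sketch of the qemu-side handoff to an in-kernel virtio-net backend
 * (vhost-net).  VHOST_SET_VRING_NUM, VHOST_SET_VRING_BASE and the
 * eventfd setup are omitted for brevity.
 */
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/vhost.h>

static int vhost_net_start(int tap_fd, struct vhost_memory *mem,
			   struct vhost_vring_addr *rx,
			   struct vhost_vring_addr *tx)
{
	int vhost_fd = open("/dev/vhost-net", O_RDWR);
	struct vhost_vring_file backend;

	ioctl(vhost_fd, VHOST_SET_OWNER);		/* bind the device to this process */
	ioctl(vhost_fd, VHOST_SET_MEM_TABLE, mem);	/* describe guest memory layout */

	/* tell the kernel where the guest's rx/tx rings live */
	ioctl(vhost_fd, VHOST_SET_VRING_ADDR, rx);
	ioctl(vhost_fd, VHOST_SET_VRING_ADDR, tx);

	/* attach the tap device; from here on the kernel moves the packets */
	backend.index = 0;
	backend.fd = tap_fd;
	ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &backend);
	backend.index = 1;
	ioctl(vhost_fd, VHOST_NET_SET_BACKEND, &backend);

	return vhost_fd;
}
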

	Ingo
