Re: [RFC PATCH 00/17] virtual-bus

Avi Kivity wrote:
> Gregory Haskins wrote:
>> Avi Kivity wrote:
>>  
>>> Gregory Haskins wrote:
>>>    
>>>> Rusty Russell wrote:
>>>>  
>>>>      
>>>>> On Thursday 02 April 2009 21:36:07 Gregory Haskins wrote:
>>>>>             
>>>>>> You do not need to know when the packet is copied (which I
>>>>>> currently do).  You only need it for zero-copy (which I would
>>>>>> like to support, but as I understand it there are problems with
>>>>>> the reliability of the proper callback, i.e. skb->destructor).
>>>>>>                     
>>>>> But if you have a UP guest,
>>>>>             
>>>> I assume you mean UP host ;)
>>>>
>>>>         
>>> I think Rusty did mean a UP guest, and without schedule-and-forget.
>>>     
>> That doesn't make sense to me, though.  All the testing I did was
>> with a UP guest, actually.  Why would I be constrained to run without
>> the scheduling unless the host was also UP?
>>   
>
> You aren't constrained.  And your numbers show it works.
>
>>>
>>> The problem is that we already have virtio guest drivers going several
>>> kernel versions back, as well as Windows drivers.  We can't keep
>>> changing the infrastructure under people's feet.
>>>     
>>
>> Well, IIUC the virtio code itself declares the ABI as unstable, so there
>> technically *is* an out if we really wanted one.  But I certainly
>> understand the desire to not change this ABI if at all possible, and
>> thus the resistance here.
>>   
>
> virtio is a stable ABI.

Dang!  Scratch that.
>
>> However, there's still the possibility we can make this work in an
>> ABI-friendly way with cap-bits, or other such features.  For
>> instance, the virtio-net driver could register both with pci and
>> vbus-proxy and instantiate a device with a slightly different ops
>> structure for each.  Alternatively, we could write a host-side shim
>> to expose vbus devices as pci devices, or something like that.
>>   
>
> Sounds complicated...

Well, the first solution would be relatively trivial...at least on the
guest side.  All the other infrastructure is done and included in the
series I sent out.  The changes to the virtio-net driver on the guest
itself would be minimal.  The bigger effort would be converting
venet-tap to use virtio-ring instead of IOQ.  But even that would
arguably be less work than starting a virtio-net backend module from
scratch, because starting over would mean coding up not only the entire
virtio-net backend but also all the pci emulation and irq routing stuff
that is required (and is already done by the vbus infrastructure).
Here all the major pieces are in place; just the xmit and rx routines
need to be converted to virtio-isms.
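
To make the first option concrete, here is roughly what I am picturing
on the guest side: the driver registers on both buses, with each probe
wiring up a slightly different ops structure.  This is only a sketch;
the vbus_net_driver structure and vbus_driver_register() below are
illustrative stand-ins (stubbed so the sketch is self-contained), not
the actual API from the series, while the pci half is the stock driver
model.

#include <linux/module.h>
#include <linux/pci.h>

/* --- pci path: the stock driver model virtio-net already uses --- */
static const struct pci_device_id virtio_net_pci_ids[] = {
	{ PCI_DEVICE(0x1af4, 0x1000) },	/* virtio-net */
	{ }
};

static int virtio_net_pci_probe(struct pci_dev *pdev,
				const struct pci_device_id *id)
{
	/* install the pci-backed ops structure for this device */
	return 0;
}

static struct pci_driver virtio_net_pci = {
	.name     = "virtio-net",
	.id_table = virtio_net_pci_ids,
	.probe    = virtio_net_pci_probe,
};

/* --- vbus-proxy path: names here are illustrative stand-ins --- */
struct vbus_net_driver {
	const char *type;
	int (*probe)(void *vdev);
};

static int virtio_net_vbus_probe(void *vdev)
{
	/* install the vbus-backed ops structure instead */
	return 0;
}

static struct vbus_net_driver virtio_net_vbus = {
	.type  = "virtio-net",
	.probe = virtio_net_vbus_probe,
};

static int vbus_driver_register(struct vbus_net_driver *drv)
{
	/* stub standing in for whatever registration call the series
	 * provides on the vbus-proxy bus */
	return 0;
}

static int __init virtio_net_dual_init(void)
{
	int ret = pci_register_driver(&virtio_net_pci);
	if (ret)
		return ret;
	return vbus_driver_register(&virtio_net_vbus);	/* mirror path */
}
module_init(virtio_net_dual_init);
MODULE_LICENSE("GPL");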
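And on the venet-tap side, the xmit conversion essentially amounts to
draining the guest's avail ring and posting completions to the used
ring.  A minimal sketch, ignoring descriptor chaining and endianness
for brevity; translate_gpa() and tap_xmit() are hypothetical stand-ins
for the existing venet-tap plumbing, not functions from the series:

#include <linux/types.h>
#include <linux/virtio_ring.h>

void *translate_gpa(u64 gpa);		/* hypothetical: gpa -> host va */
void tap_xmit(void *buf, u32 len);	/* hypothetical: tap tx path */

static void venet_tap_tx_poll(struct vring *vr, u16 *last_avail)
{
	/* drain every buffer the guest has published in the avail ring */
	while (*last_avail != vr->avail->idx) {
		u16 head = vr->avail->ring[*last_avail % vr->num];
		struct vring_desc *d = &vr->desc[head];

		/* single-descriptor case only; chained descriptors
		 * (VRING_DESC_F_NEXT) would need a walk here */
		tap_xmit(translate_gpa(d->addr), d->len);

		/* post the completion in the used ring */
		vr->used->ring[vr->used->idx % vr->num].id  = head;
		vr->used->ring[vr->used->idx % vr->num].len = d->len;
		smp_wmb();		/* publish the entry before idx */
		vr->used->idx++;

		(*last_avail)++;
	}
}

The rx side would be the mirror image: pull empty buffers off the avail
ring, fill them from the tap receive path, and post the lengths back in
the used ring.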

For the second option, I agree.  It's probably too nasty, and it would
be better if there were just either a virtio-net-to-kvm-host hack or a
more pci-oriented version of a vbus-like framework.

That said, there is certainly nothing wrong with having an alternate
option.  There is plenty of precedent for having different drivers for
different subsystems, etc., even if there is overlap.  Heck, even KVM
has realtek, e1000, and virtio-net.  Would our kvm community be willing
to work with me to get these patches merged?  I am perfectly willing to
maintain them.  Note, though, that the general infrastructure should
probably not live in -kvm (perhaps -tip, -mm, or -next is more
appropriate).  So a good plan might be to shoot for the core going into
a more general upstream tree.  When/if that happens, the kvm community
could consider the kvm-specific parts.  I realize this is all pending
review and acceptance by everyone involved...

-Greg


