Re: [RFC PATCH 00/17] virtual-bus

Rusty Russell wrote:
> On Wednesday 01 April 2009 22:05:39 Gregory Haskins wrote:
>   
>> Rusty Russell wrote:
>>     
>>> I could dig through the code, but I'll ask directly: what heuristic do
>>> you use for notification prevention in your venet_tap driver?
>>>       
>> I am not 100% sure I know what you mean with "notification prevention",
>> but let me take a stab at it.
>>     
>
> Good stab :)
>
>   
>> I only signal back to the guest to reclaim its skbs every 10
>> packets, or if I drain the queue, whichever comes first (note to self:
>> make this # configurable).
>>     
>
> Good stab, though I was referring to guest->host signals (I'll assume
> you use a similar scheme there).
>   
Oh, actually no.  The guest->host path only uses the "bidir napi" scheme
I mentioned.  The first packet hypercalls the host immediately with no
delay, schedules my host-side "rx" thread, disables subsequent
hypercalls, and returns to the guest.  If the guest sends another packet
before the host has drained all queued skbs (in this case, 1), it simply
queues it to the ring with no additional hypercall.  As with typical
napi ingress processing, the host leaves hypercalls disabled until it
finds the ring empty, so this process can continue indefinitely until
the host catches up.  Once the ring is fully drained, the host
re-enables the hypercall channel and subsequent transmissions repeat the
original process.
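
To make that concrete, here is a rough sketch of the guest-side tx path
in C.  All of the names (venet_ring_enqueue(), venet_tx_kick(), and
friends) are made up for this email, not the actual vbus API:

/*
 * Sketch of the guest-side transmit path described above.
 */
static int venet_start_xmit(struct sk_buff *skb, struct net_device *dev)
{
	struct venet_priv *priv = netdev_priv(dev);

	venet_ring_enqueue(&priv->txring, skb);	/* always just queue it */

	/*
	 * Only the first packet of a burst traps to the host; the host
	 * disables this path until it has drained the ring.
	 */
	if (venet_hypercalls_enabled(priv))
		venet_tx_kick(priv);	/* hypercall: wake host rx thread */

	return NETDEV_TX_OK;
}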

In summary, infrequent transmissions will tend to have one hypercall per
packet.  Bursty transmissions will have one hypercall per burst
(starting immediately with the first packet).  In both cases, we
minimize the latency to get the first packet "out the door".

So really the only place I am using a funky heuristic is the modulus 10
operation for tx-complete going host->guest.  The rest are kind of
standard napi event mitigation techniques.

> You use a number of packets, qemu uses a timer (150usec), lguest uses a
> variable timer (starting at 500usec, dropping by 1 every time but increasing
> by 10 every time we get fewer packets than last time).
>
> So, if the guest sends two packets and stops, you'll hang indefinitely?
>   
Shouldn't, no.  The host sends tx-complete interrupts at *most* every 10
packets, but if it drains the queue before that count is reached, it
sends a tx-complete immediately, right before it re-enables hypercalls.
So there is no hang, and there is no extra delay.
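
Sketched out (again with made-up names; the real code is linked below),
the host-side drain loop looks roughly like this:

/*
 * Sketch of the host-side "rx" thread: forward everything in the ring,
 * signaling tx-complete at most every 10 packets, then complete any
 * remainder and re-open the hypercall channel once the ring is empty.
 */
static void venet_rx_thread_fn(struct venet_tap *tap)
{
	int pkts = 0;

	while (!venet_ring_empty(&tap->txring)) {
		venet_forward(tap, venet_ring_dequeue(&tap->txring));

		if (++pkts == 10) {		/* tx-complete at most  */
			venet_signal_txc(tap);	/* every 10 packets     */
			pkts = 0;
		}
	}

	if (pkts)			/* drained early: complete the  */
		venet_signal_txc(tap);	/* remainder immediately...     */

	venet_enable_hypercalls(tap);	/* ...and re-open the channel   */
}

(A real implementation also has to re-check the ring after re-enabling
hypercalls, to close the race with a guest that transmitted in the
window before the channel was re-opened.)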

For reference, here is the modulus 10 signaling
(./drivers/vbus/devices/venet-tap.c, line 584):

http://git.kernel.org/?p=linux/kernel/git/ghaskins/vbus/linux-2.6.git;a=blob;f=drivers/vbus/devices/venet-tap.c;h=0ccb7ed94a1a8edd0cca269488f940f40fce20df;hb=master#l584

Here is the one that happens after the queue is fully drained (line 593):

http://git.kernel.org/?p=linux/kernel/git/ghaskins/vbus/linux-2.6.git;a=blob;f=drivers/vbus/devices/venet-tap.c;h=0ccb7ed94a1a8edd0cca269488f940f40fce20df;hb=master#l593

and finally, here is where I re-enable hypercalls (or system calls, if
the driver is in userspace, etc.):

http://git.kernel.org/?p=linux/kernel/git/ghaskins/vbus/linux-2.6.git;a=blob;f=drivers/vbus/devices/venet-tap.c;h=0ccb7ed94a1a8edd0cca269488f940f40fce20df;hb=master#l600

> That's why we use a timer, otherwise any mitigation scheme has this issue.
>   

I'm not sure I follow.  I don't think I need a timer at all using this
scheme, but perhaps I am missing something?

Thanks Rusty!
-Greg


