Re: [PATCH 2/2] virtio_net: remove send completion interrupts and avoid TX queue overrun through packet drop

On Thu, Mar 24, 2011 at 10:46:49AM -0700, Shirley Ma wrote:
> On Thu, 2011-03-24 at 16:28 +0200, Michael S. Tsirkin wrote:
> > On Thu, Mar 24, 2011 at 11:00:53AM +1030, Rusty Russell wrote:
> > > > Simply removing the notify here does help the case where TX overrun
> > > > hits too often; for example, with a 1K message size, single TCP_STREAM
> > > > performance improved from 2.xGb/s to 4.xGb/s.
> > > 
> > > OK, we'll be getting rid of the "kick on full", so please delete that on
> > > all benchmarks.
> > > 
> > > Now, does the capacity check before add_buf() still win anything?  I
> > > can't see how unless we have some weird bug.
> > > 
> > > Once we've sorted that out, we should look at the more radical change
> > > of publishing last_used and using that to intuit whether interrupts
> > > should be sent.  If we're not careful with ordering and barriers that
> > > could introduce more bugs.
> > 
> > Right. I am working on this, and trying to be careful.
> > One thing I'm in doubt about: sometimes we just want to
> > disable interrupts. Should we still use flags in that case?
> > I thought that if we make the published index 0 to vq->num - 1,
> > then a special value in the index field could disable
> > interrupts completely. We could even reuse the space
> > for the flags field to stick the index in. Too complex?
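
To make that a bit more concrete, a toy sketch of the published-index idea
(made-up names, userspace-style, all memory barriers omitted, not the real
virtio_ring code):

/* Toy model of "guest publishes last_used, host uses it to decide
 * whether to interrupt".  Invented names; all barriers omitted.
 */
#include <stdint.h>
#include <stdbool.h>

#define IDX_NO_INTR 0xffff  /* special published value: never interrupt.
                             * (The idea above is to keep the published index
                             * in 0..vq->num-1 so out-of-range values can mean
                             * "disabled"; a fixed constant keeps the toy simple.)
                             */

struct toy_used_ring {
    uint16_t used_idx;             /* host: bumped as buffers are consumed */
    uint16_t last_used_published;  /* guest: "I have processed up to here" */
};

/* Host side: after bumping used_idx past old_used, decide whether the
 * guest still needs an interrupt for the buffers just completed.
 */
static bool host_should_interrupt(const struct toy_used_ring *r,
                                  uint16_t old_used)
{
    uint16_t seen = r->last_used_published;

    if (seen == IDX_NO_INTR)
        return false;   /* guest disabled interrupts completely */

    /* If the guest had already caught up with everything completed
     * earlier (seen == old_used), it may be idle and needs a poke.
     * If it is lagging, it will find the new entries anyway when it
     * gets to the old ones, so skip the interrupt.
     */
    return seen == old_used;
}
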
> > > Anything else on the optimization agenda I've missed?
> > > 
> > > Thanks,
> > > Rusty.
> > 
> > Several other things I am looking at; cooperation welcome:
> > 1. It's probably a good idea to update avail index
> >    immediately instead of upon kick: for RX
> >    this might help parallelism with the host.
> Is it possible to use the same idea of publishing the last used idx to
> publish the avail idx? Then we can save guest iowrites/exits.

Yes, but unrelated to 1 above :)
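
For the record, the mirror image would look roughly like the toy sketch
below: the host publishes the last avail entry it has picked up, and the
guest skips the kick when the host is clearly still busy. Again made-up
names, no barriers, not the real code:

/* Toy mirror of the used-side idea above. */
#include <stdint.h>
#include <stdbool.h>

struct toy_avail_ring {
    uint16_t avail_idx;            /* guest: bumped as buffers are added */
    uint16_t last_avail_published; /* host: "I have picked up entries up to here" */
};

/* Guest side: after bumping avail_idx past old_avail, decide whether the
 * host actually needs a kick (iowrite + exit) for the buffers just added.
 */
static bool guest_should_kick(const struct toy_avail_ring *r,
                              uint16_t old_avail)
{
    /* If the host had already picked up everything queued before
     * (published == old_avail), it may have stopped polling and needs a
     * kick.  If it is still behind, it will see the new entries on its
     * own, so the iowrite/exit can be skipped.
     */
    return r->last_avail_published == old_avail;
}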

> > 2. Adding an API to add a single buffer instead of s/g,
> >    seems to help a bit.
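
Purely as an illustration of what such an API could look like (the function
does not exist, and the structures below are a stripped-down stand-in for
the real ring): the win would come from writing one descriptor directly
instead of walking a one-element scatterlist.

/* Illustration only: an "add exactly one out-buffer" fast path for a
 * stripped-down descriptor ring.  Invented names, not the in-tree API.
 */
#include <stdint.h>

struct toy_desc {
    uint64_t addr;
    uint32_t len;
    uint16_t flags;
    uint16_t next;
};

struct toy_vq {
    struct toy_desc *desc;   /* descriptor table */
    void **data;             /* per-descriptor cookies */
    uint16_t free_head;      /* head of the free list threaded via 'next' */
    uint16_t num_free;       /* free descriptors left */
};

/* Add one out-buffer; returns the head index, or -1 if the ring is full. */
static int toy_add_buf_single(struct toy_vq *vq, uint64_t addr,
                              uint32_t len, void *cookie)
{
    uint16_t head = vq->free_head;

    if (vq->num_free == 0)
        return -1;

    vq->desc[head].addr  = addr;
    vq->desc[head].len   = len;
    vq->desc[head].flags = 0;            /* single descriptor: no NEXT, no WRITE */
    vq->data[head]       = cookie;

    vq->free_head = vq->desc[head].next; /* free descriptors are chained at init */
    vq->num_free--;

    /* The caller still puts 'head' into the avail ring and (maybe) kicks,
     * exactly as on the scatter/gather path.
     */
    return head;
}
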
> > 
> > 3. For TX sometimes we free a single buffer, sometimes
> >    a ton of them, which might make the transmit latency
> >    vary. It's probably a good idea to limit this,
> >    maybe free the minimal number possible to keep the device
> >    going without stops, maybe free up to MAX_SKB_FRAGS.
> 
> I am playing with it now, collecting more perf data to see what the best
> number of used buffers to free is.

The best IMO is to keep the number of freed buffers constant
so that we have more or less identical overhead for each packet.
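
Roughly like the sketch below, as an illustration rather than a patch
(names invented; TX_FREE_BATCH is exactly the number your perf runs would
have to pick):

/* Sketch: bound the free work done per transmit to a fixed batch, so the
 * overhead added to each packet stays roughly constant instead of
 * sometimes freeing one buffer and sometimes a ton of them.
 * Invented names and structures; not the real driver code.
 */
#include <stddef.h>

#define TX_FREE_BATCH 16          /* placeholder value, to be tuned from perf runs */

struct toy_txq {
    void *pending[256];           /* completed-but-not-yet-freed cookies */
    unsigned int head, tail;      /* free-running ring indices */
};

static void *toy_get_used_buf(struct toy_txq *q)
{
    void *cookie;

    if (q->head == q->tail)
        return NULL;              /* nothing completed right now */
    cookie = q->pending[q->head % 256];
    q->head++;
    return cookie;
}

static void toy_free_skb(void *cookie)
{
    (void)cookie;                 /* stand-in for dev_kfree_skb_any() */
}

/* Called from the xmit path: free at most TX_FREE_BATCH used buffers. */
static unsigned int toy_free_old_xmit(struct toy_txq *q)
{
    unsigned int freed = 0;
    void *cookie;

    while (freed < TX_FREE_BATCH && (cookie = toy_get_used_buf(q)) != NULL) {
        toy_free_skb(cookie);
        freed++;
    }
    return freed;
}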

> > 4. If the ring is full, we now notify right after
> >    the first entry is consumed. For TX this is suboptimal,
> >    we should try delaying the interrupt on host.
> 
> > More ideas, would be nice if someone can try them out:
> > 1. We are allocating/freeing buffers for indirect descriptors.
> >    Use some kind of pool instead?
> >    And we could preformat part of the descriptor.
> > 2. I didn't have time to work on virtio2 ideas presented
> >    at the kvm forum yet, any takers?
> If I have time, I will look at this.
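
Back to point 1 above (the alloc/free per indirect descriptor list): even
something as simple as the sketch below should be enough to measure whether
a pool is worth it, and recycled blocks could keep their preformatted
parts. Made-up names; malloc/free stand in for kmalloc/kfree (or a
kmem_cache) in real life:

/* Rough sketch of a small per-queue pool for indirect descriptor blocks,
 * to avoid an alloc/free pair per packet.
 */
#include <stdlib.h>

#define POOL_SIZE   64
#define INDIRECT_SZ 512           /* placeholder: room for a worst-case descriptor list */

struct desc_pool {
    void *free_bufs[POOL_SIZE];
    unsigned int nr_free;
};

static void *pool_get(struct desc_pool *p)
{
    if (p->nr_free)
        return p->free_bufs[--p->nr_free];  /* recycled, possibly preformatted */
    return malloc(INDIRECT_SZ);             /* fall back to the allocator */
}

static void pool_put(struct desc_pool *p, void *buf)
{
    if (p->nr_free < POOL_SIZE)
        p->free_bufs[p->nr_free++] = buf;
    else
        free(buf);
}
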
> 
> Thanks
> Shirley
> 