Re: [PATCH] virtio-ring: Use threshold for switching to indirect descriptors

On Thu, Dec 01, 2011 at 01:12:25PM +1030, Rusty Russell wrote:
> On Wed, 30 Nov 2011 18:11:51 +0200, Sasha Levin <levinsasha928@xxxxxxxxx> wrote:
> > On Tue, 2011-11-29 at 16:58 +0200, Avi Kivity wrote:
> > > On 11/29/2011 04:54 PM, Michael S. Tsirkin wrote:
> > > > > 
> > > > > Which is actually strange, weren't indirect buffers introduced to make
> > > > > the performance *better*? From what I see it's pretty much the
> > > > > same/worse for virtio-blk.
> > > >
> > > > I know they were introduced to allow adding very large bufs.
> > > > See 9fa29b9df32ba4db055f3977933cd0c1b8fe67cd
> > > > Mark, you wrote the patch, could you tell us which workloads
> > > > benefit the most from indirect bufs?
> > > >
> > > 
> > > Indirects are really for block devices with many spindles, since there
> > > the limiting factor is the number of requests in flight (e.g., with a
> > > 128-entry ring and 8-segment requests, direct descriptors allow only
> > > 16 requests in flight, while indirect allows 128).  Network
> > > interfaces are limited by bandwidth; it's better to increase the ring
> > > size and use direct buffers there (so the ring size more or less
> > > corresponds to the buffer size).
> > > 
> > 
> > I did some testing of indirect descriptors under different workloads.
> 
> MST and I discussed getting clever with dynamic limits ages ago, but it
> was down low on the TODO list.  Thanks for diving into this...
> 
> AFAICT, if the ring never fills, direct is optimal.  When the ring
> fills, indirect is optimal (we're better off queueing now than later).
> 
> Why not something simple, like a threshold which drops every time we
> fill the ring?
> 
> struct vring_virtqueue
> {
> ...
>         int indirect_thresh;
> ...
> };
> 
> virtqueue_add_buf_gfp()
> {
> ...
> 
>         /* Go indirect once the ring is more than indirect_thresh full. */
>         if (vq->indirect &&
>             (vq->vring.num - vq->num_free) + out + in > vq->indirect_thresh)
>                 return indirect();
> ...
> 
>         /* Ring filled up: lower the threshold so we go indirect sooner. */
>         if (vq->num_free < out + in) {
>                 if (vq->indirect && vq->indirect_thresh > 0)
>                         vq->indirect_thresh--;
>         }
> ...
> }
> 
> Too dumb?
> 
> Cheers,
> Rusty.

We'll presumably need some logic to increment it back,
to account for random workload changes.
Something like slow start?
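
For concreteness, a rough sketch of what that might look like, in the
same elided style as Rusty's snippet above (indirect_thresh is from
his sketch; the add_ok counter and the grow-back policy here are
purely illustrative, nothing like this exists in the tree):

struct vring_virtqueue
{
...
        int indirect_thresh;    /* go indirect above this queue depth */
        unsigned int add_ok;    /* successful adds since the last fill */
...
};

virtqueue_add_buf_gfp()
{
...
        if (vq->indirect &&
            (vq->vring.num - vq->num_free) + out + in > vq->indirect_thresh)
                return indirect();
...
        if (vq->num_free < out + in) {
                /* Ring filled: back off and restart the grow-back window. */
                if (vq->indirect && vq->indirect_thresh > 0)
                        vq->indirect_thresh--;
                vq->add_ok = 0;
...
        }
...
        /* On the success path: every window of vring.num trouble-free
         * adds earns the threshold back one slot, up to the ring size,
         * so a transient burst doesn't pin us in indirect mode forever. */
        if (vq->indirect &&
            vq->indirect_thresh < vq->vring.num &&
            ++vq->add_ok >= vq->vring.num) {
                vq->indirect_thresh++;
                vq->add_ok = 0;
        }
...
}

(Strictly that grows linearly, one slot per window, rather than
doubling the way TCP slow start does; doubling indirect_thresh per
window instead would recover faster after a transient burst.)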

-- 
MST