On Sun, 2011-12-04 at 18:22 +0200, Michael S. Tsirkin wrote:
> On Sun, Dec 04, 2011 at 02:13:51PM +0200, Sasha Levin wrote:
> > On Sun, 2011-12-04 at 13:52 +0200, Avi Kivity wrote:
> > > On 12/03/2011 01:50 PM, Sasha Levin wrote:
> > > > On Fri, 2011-12-02 at 11:16 +1030, Rusty Russell wrote:
> > > > > On Thu, 1 Dec 2011 12:26:42 +0200, "Michael S. Tsirkin" <mst@xxxxxxxxxx> wrote:
> > > > > > On Thu, Dec 01, 2011 at 10:09:37AM +0200, Sasha Levin wrote:
> > > > > > > On Thu, 2011-12-01 at 09:58 +0200, Michael S. Tsirkin wrote:
> > > > > > > > We'll presumably need some logic to increment it back,
> > > > > > > > to account for random workload changes.
> > > > > > > > Something like slow start?
> > > > > > >
> > > > > > > We can increment it each time the queue was less than 10% full; it
> > > > > > > should act like slow start, no?
> > > > > >
> > > > > > No, we really shouldn't get an empty ring as long as things behave
> > > > > > well. What I meant is something like:
> > > > >
> > > > > I was thinking of the network output case, but you're right. We need to
> > > > > distinguish between usually full (e.g. virtio-net input) and usually
> > > > > empty (e.g. virtio-net output).
> > > > >
> > > > > The signal for "we need to pack more into the ring" is different. We could
> > > > > use some hacky heuristic like "out == 0", but I'd rather make it explicit
> > > > > when we set up the virtqueue.
> > > > >
> > > > > Our other alternative, moving the logic to the driver, is worse.
> > > > >
> > > > > As to fading the effect over time, that's harder. We have to deplete
> > > > > the ring quite a few times before it turns into always-indirect. We
> > > > > could back off every time the ring is totally idle, but that may hurt
> > > > > bursty traffic. Let's try simple first?
> > > >
> > > > I tried a different approach and put the indirect descriptors in a
> > > > kmem_cache, as Michael suggested. The benchmarks showed that this way
> > > > virtio-net actually worked faster with indirect on, even in a single
> > > > stream.
> > >
> > > How much better?
> >
> > host->guest was the same with indirect on and off, and guest->host went
> > up by 5% with indirect on.
> >
> > This was just a simple 1 TCP stream test.
>
> I'm confused. Didn't you see a bigger benefit for guest->host by
> switching indirect off?

The 5% improvement is over the 'regular' indirect on, not over indirect
off. Sorry for the confusion there.

I suggested this change regardless of the outcome of the indirect
descriptor threshold discussion, since it would help anyway.

-- 
Sasha.
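
For reference, a minimal sketch of the kmem_cache idea discussed above, in
kernel C. This is not Sasha's actual patch: the cap MAX_INDIRECT_DESC, the
cache name, and the helper names are hypothetical. The idea is simply to
back fixed-size indirect descriptor arrays with a dedicated slab cache
instead of a kmalloc()/kfree() pair on every request, falling back to
kmalloc() for anything larger than the cached size.

	/*
	 * Sketch only -- not the actual patch.  Identifiers and the
	 * MAX_INDIRECT_DESC cap are hypothetical.
	 */
	#include <linux/errno.h>
	#include <linux/slab.h>
	#include <linux/virtio_ring.h>

	#define MAX_INDIRECT_DESC 16			/* assumed per-buffer cap */

	static struct kmem_cache *indirect_cache;	/* created once at init */

	static int indirect_cache_init(void)
	{
		indirect_cache = kmem_cache_create("vring_indirect",
				MAX_INDIRECT_DESC * sizeof(struct vring_desc),
				0, 0, NULL);
		return indirect_cache ? 0 : -ENOMEM;
	}

	/* Use the cache for small requests, fall back to kmalloc otherwise. */
	static struct vring_desc *alloc_indirect(unsigned int nents, gfp_t gfp)
	{
		if (nents <= MAX_INDIRECT_DESC)
			return kmem_cache_alloc(indirect_cache, gfp);
		return kmalloc(nents * sizeof(struct vring_desc), gfp);
	}

	static void free_indirect(struct vring_desc *desc, unsigned int nents)
	{
		if (nents <= MAX_INDIRECT_DESC)
			kmem_cache_free(indirect_cache, desc);
		else
			kfree(desc);
	}

The 5% guest->host improvement Sasha reports presumably comes from avoiding
a size-class kmalloc allocation per request; the fallback path keeps
behaviour unchanged for buffers larger than the cached size.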