On Wed, Feb 02, 2011 at 10:11:51AM -0800, Shirley Ma wrote:
> On Wed, 2011-02-02 at 19:32 +0200, Michael S. Tsirkin wrote:
> > OK, but this should have no effect with a vhost patch
> > which should ensure that we don't get an interrupt
> > until the queue is at least half empty.
> > Right?
>
> There should be some coordination between guest and vhost.

What kind of coordination?

With a patched vhost, and a full ring, you should get an
interrupt per 100 packets. Is this what you see?
And if yes, isn't the guest patch doing nothing then?

> We shouldn't
> count the TX packets when netif queue is enabled since next guest TX
> xmit will free any used buffers in vhost. We need to be careful here in
> case we miss the interrupts when netif queue has stopped.
>
> However we can't change old guest so we can test the patches separately
> for guest only, vhost only, and the combination.
> > > >
> > > > Yes, it seems unrelated to tx interrupts.
> > >
> > > The issue is more likely related to latency.
> >
> > Could be. Why do you think so?
>
> Since I played with latency hack, I can see performance difference for
> different latency.

Which hack was that?

> > > Do you have anything in
> > > mind on how to reduce vhost latency?
> > >
> > > Thanks
> > > Shirley
> >
> > Hmm, bypassing the bridge might help a bit.
> > Are you using tap+bridge or macvtap?
>
> I am using tap+bridge for TCP_RR test, I think Steven tested macvtap
> before. He might have some data from his workload performance
> measurement.
>
> Shirley

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html