On Fri, 2010-10-29 at 10:10 +0200, Michael S. Tsirkin wrote:
> Hmm. I don't yet understand. We are still doing copies into the per-vq
> buffer, and the data copied is really small. Is it about cache line
> bounces? Could you try figuring it out?

The per-vq buffer is much less expensive than three put_copy() calls. I
will collect profiling data to show that.

> > > 2. How about flushing out queued stuff before we exit
> > > the handle_tx loop? That would address most of
> > > the spec issue.
> >
> > The performance is almost the same as the previous patch. I will
> > resubmit the modified one, adding vhost_add_used_and_signal_n after
> > the handle_tx loop to process the pending queue.
> >
> > This patch was part of the modified macvtap zero-copy work, which I
> > haven't submitted yet. I found it helped vhost TX in general. The
> > pending queue will later be used for DMA-done handling, so I put it
> > in vq instead of a local variable in handle_tx.
> >
> > Thanks
> > Shirley
>
> BTW why do we need another array? Isn't heads field exactly what we
> need here?

The heads field only holds up to 32 entries, and the test results show
that the more used buffers we accumulate before add-and-signal, the
better the performance. That was one reason I didn't use heads. The
other reason is that I use these buffers for pending DMA completions in
the macvtap zero-copy patch, where the count can be up to vq->num in
the worst case.

Thanks
Shirley
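For reference, a rough sketch of the batching idea discussed above:
queue each used descriptor in a per-vq pending array while the TX loop
runs, then flush the whole batch with a single
vhost_add_used_and_signal_n() call after the loop. This is not the
actual patch; vq->pend, vq->pend_idx, get_tx_desc() and send_packet()
are made-up names for illustration, while vhost_add_used_and_signal_n(),
struct vring_used_elem and VHOST_NET_VQ_TX are the existing
vhost/virtio names mentioned in the thread.

/*
 * Sketch only, not the submitted patch.  Accumulate used descriptors in
 * a per-vq pending array and notify the guest once per handle_tx run.
 */
static void handle_tx(struct vhost_net *net)
{
	struct vhost_virtqueue *vq = &net->dev.vqs[VHOST_NET_VQ_TX];
	int head;

	for (;;) {
		head = get_tx_desc(net, vq);	/* hypothetical: pop next avail descriptor */
		if (head < 0)
			break;			/* ring empty, stop the loop */

		send_packet(net, vq, head);	/* hypothetical: hand the frame to the backend */

		/* Record the used entry instead of signalling per packet. */
		vq->pend[vq->pend_idx].id  = head;
		vq->pend[vq->pend_idx].len = 0;
		vq->pend_idx++;
	}

	/* Flush the batch: one used-ring update, one guest notification. */
	if (vq->pend_idx) {
		vhost_add_used_and_signal_n(&net->dev, vq,
					    vq->pend, vq->pend_idx);
		vq->pend_idx = 0;
	}
}

Keeping the pending array in the vq (rather than on handle_tx's stack)
is what lets the same storage later hold completions that are only
known at DMA-done time in the zero-copy case.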