Since we have had many performance discussions about virtio_net and vhost communication, I think it's better to build a common understanding of the code first; then we can seek the right directions to improve it. We also need to collect more statistics on both virtio and vhost.

Let's look at TX first: from virtio_net (guest) to vhost (host). The send vq is shared between guest virtio_net and host vhost, and memory barriers are used to synchronize the changes.

Initial state:

- The guest virtio_net TX completion interrupt (for freeing used skbs) is disabled. It is enabled only when the send vq overruns and the guest has to wait for vhost to consume more of the available skbs.

- Host vhost notification (for consuming available skbs) is enabled in the beginning. It is disabled whenever the send vq is not empty; once the send vq becomes empty, vhost re-enables it.

In the guest's start_xmit(), the guest first frees used skbs, then posts available skbs to vhost. Ideally the guest never has to enable the TX completion interrupt to free used skbs, as long as vhost keeps posting used skbs back into the send vq.

In vhost's handle_tx(), vhost is woken up by the guest whenever the send vq has an skb to send; as long as the send vq is not empty, vhost stays in handle_tx() without enabling notification. Ideally, if the guest keeps placing xmit skbs in the send vq, notification is never enabled.

I don't see issues with this implementation in principle. However:

- In our TCP_STREAM small-message-size test, we found that somehow the guest couldn't see more used skbs to free, which caused frequent TX send queue overruns.

- In our TCP_RR small-message-size, multiple-streams test, we found that vhost couldn't see more xmit skbs in the send vq, so it enabled notification too often.

What is the possible cause here on the xmit path? How are the guest and vhost being scheduled? And is it possible for guest virtio_net to cooperate with vhost for ideal performance, so that both keep pace with the send vq without many notifications and exits?

To make the discussion concrete, I have appended simplified models of the two flows below.

Thanks
Shirley
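
First, a minimal single-threaded model of the start_xmit() flow described above. All names here (send_vq, free_used_skbs, VQ_SIZE, ...) are hypothetical stand-ins chosen for the sketch, not the real virtio_net code, and interrupt arming is reduced to a plain flag:

#include <stdbool.h>
#include <stdio.h>

#define VQ_SIZE 4

struct send_vq {
	int  avail;           /* skbs posted to host, not yet consumed  */
	int  used;            /* skbs consumed by host, not yet freed   */
	bool tx_irq_enabled;  /* TX completion interrupt armed?         */
};

/* Reclaim skbs the host has marked used; returns how many were freed.
 * (Host consumption, not modeled here, would move entries from avail
 * to used.) */
static int free_used_skbs(struct send_vq *vq)
{
	int n = vq->used;
	vq->used = 0;
	return n;
}

static int start_xmit(struct send_vq *vq, int skb)
{
	/* Normal path: reclaim used skbs with the interrupt disabled;
	 * this is the only place the guest usually frees them. */
	free_used_skbs(vq);

	if (vq->avail + vq->used == VQ_SIZE) {
		/* Overrun: no free descriptors.  Only now arm the TX
		 * completion interrupt. */
		vq->tx_irq_enabled = true;
		/* Re-check after arming: the host may have consumed
		 * entries in the meantime; if so, disarm and continue
		 * rather than wait for an interrupt. */
		if (free_used_skbs(vq) > 0) {
			vq->tx_irq_enabled = false;
		} else {
			printf("guest: vq full, skb %d deferred, TX irq armed\n", skb);
			return -1;  /* stop queue until the interrupt fires */
		}
	}

	vq->avail++;  /* post the skb and kick the host */
	printf("guest: posted skb %d, kicked host\n", skb);
	return 0;
}

int main(void)
{
	struct send_vq vq = { 0 };

	/* Host stays idle here, so the vq overruns after 4 skbs. */
	for (int skb = 0; skb < 6; skb++)
		start_xmit(&vq, skb);
	return 0;
}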
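
And a matching model of the handle_tx() notification protocol; again, all names are hypothetical stand-ins, not the real vhost code. The enable-then-recheck step is what closes the race against a guest kick arriving while notification was still disabled:

#include <stdbool.h>
#include <stdio.h>

struct send_vq {
	int  avail;           /* skbs the guest has posted          */
	bool notify_enabled;  /* host wants a kick on new work?     */
};

/* Pull one available skb from the vq; false when it is empty. */
static bool get_tx_skb(struct send_vq *vq)
{
	if (vq->avail == 0)
		return false;
	vq->avail--;
	return true;
}

/* Runs when the guest kicks the host. */
static void handle_tx(struct send_vq *vq)
{
	vq->notify_enabled = false;  /* vq non-empty: suppress kicks */
	for (;;) {
		while (get_tx_skb(vq))
			printf("host: transmitted one skb\n");

		vq->notify_enabled = true;  /* drained: re-arm notification */
		/* Re-check after re-arming: the guest may have posted an
		 * skb in the window before the flag became visible;
		 * without this check that skb's kick could be lost. */
		if (vq->avail == 0)
			break;
		vq->notify_enabled = false;
	}
	printf("host: vq empty, notification enabled, exit handle_tx\n");
}

int main(void)
{
	struct send_vq vq = { .avail = 3, .notify_enabled = true };

	handle_tx(&vq);  /* guest kicked with 3 skbs queued */
	return 0;
}

In the TCP_RR case above, it is exactly this exit path (vq drained, notification re-armed) that is being taken too often.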