On Thu, Oct 06, 2022 at 03:08:12AM -0400, Michael S. Tsirkin wrote:
> On Wed, Oct 05, 2022 at 06:19:44PM -0700, Bobby Eshleman wrote:
> > This patch replaces struct virtio_vsock_pkt with struct sk_buff.
> > Using sk_buff in vsock is beneficial because a) it allows vsock to be
> > extended with socket-related features like sockmap, b) vsock may in
> > the future use other sk_buff-dependent kernel capabilities, and c)
> > vsock shares more commonality with other socket types.
> >
> > This patch is taken from the original series found here:
> > https://lore.kernel.org/all/cover.1660362668.git.bobby.eshleman@xxxxxxxxxxxxx/
> >
> > Small-sized packet throughput improves by ~5% (from 18.53 Mb/s to
> > 19.51 Mb/s). Tested using uperf, 16B payloads, 64 threads, 100s,
> > averaged over 10 test runs (n=10). This improvement is likely due to
> > packet merging.
> >
> > Large-sized packet throughput decreases by ~9% (from 27.25 Gb/s to
> > 25.04 Gb/s). Tested using uperf, 64KB payloads, 64 threads, 100s,
> > averaged over 10 test runs (n=10).
> >
> > Medium-sized packet throughput decreases by ~5% (from 4.0 Gb/s to
> > 3.81 Gb/s). Tested using uperf, 4k to 8k payload sizes picked randomly
> > according to a normal distribution, 64 threads, 100s, averaged over
> > 10 test runs (n=10).
>
> It is surprising to me that the original vsock code managed to outperform
> the new one, given that to my knowledge we did not focus on optimizing it.

Yeah, me too.

From these numbers it looks like the allocation cost has been reduced,
since we perform better with small packets. But with medium to large
packets we perform worse, perhaps because previously we were allocating
a single contiguous buffer of up to 64 KiB?
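
Just to make the comparison concrete, this is roughly the contrast I
have in mind (only a sketch, not the exact code from either the old
virtio_transport_alloc_pkt() or the new series, and the
virtio_vsock_hdr sizing is my assumption):

	struct virtio_vsock_pkt *pkt;
	struct sk_buff *skb;

	/* Old path: two separate allocations, and pkt->buf must be
	 * physically contiguous (up to 64 KiB for large packets). */
	pkt = kzalloc(sizeof(*pkt), GFP_KERNEL);
	if (!pkt)
		return NULL;
	if (len) {
		pkt->buf = kmalloc(len, GFP_KERNEL);
		if (!pkt->buf) {
			kfree(pkt);
			return NULL;
		}
	}

	/* sk_buff path: a single alloc_skb() covering header + payload;
	 * the linear data area is still one contiguous allocation. */
	skb = alloc_skb(sizeof(struct virtio_vsock_hdr) + len, GFP_KERNEL);
	if (!skb)
		return NULL;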

Could alloc_skb() instead allocate non-contiguous pages? (That would
also solve the problems we saw a few days ago.)
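
To show what I mean by non-contiguous: keep only the header in the
linear area and attach the payload as page fragments, roughly like
below (again just a sketch of the idea, not what this patch does;
error unwinding is omitted):

	struct sk_buff *skb;
	size_t rest = len;
	int i = 0;

	skb = alloc_skb(sizeof(struct virtio_vsock_hdr), GFP_KERNEL);
	if (!skb)
		return NULL;
	skb_put(skb, sizeof(struct virtio_vsock_hdr));

	/* Payload goes into individual pages, so no single large
	 * contiguous allocation is needed (a 64 KiB payload fits in
	 * 16 PAGE_SIZE frags). */
	while (rest) {
		struct page *page = alloc_page(GFP_KERNEL);
		size_t chunk = min_t(size_t, rest, PAGE_SIZE);

		if (!page)
			goto err;	/* unwind omitted in this sketch */

		skb_add_rx_frag(skb, i++, page, 0, chunk, PAGE_SIZE);
		rest -= chunk;
	}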

@Bobby Are these numbers for guest -> host communication? Can we try
the reverse path as well?

I will review the patch in the next few days!

Thanks,
Stefano