On Thu, Dec 15, 2022 at 04:04:53AM -0500, Michael S. Tsirkin wrote:
On Thu, Dec 15, 2022 at 09:47:57AM +0100, Stefano Garzarella wrote:
On Thu, Dec 15, 2022 at 04:36:44AM +0000, Bobby Eshleman wrote:
> This commit changes virtio/vsock to use sk_buff instead of
> virtio_vsock_pkt. Beyond better conforming to other net code, using
> sk_buff allows vsock to use sk_buff-dependent features in the future
> (such as sockmap) and improves throughput.
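As context for the diff below: once virtio_vsock_pkt is gone, the packet header lives at the front of the skb's buffer and is reached through a small accessor rather than through a separate struct. Roughly (the authoritative definition is the one this series adds to include/linux/virtio_vsock.h):

#include <linux/skbuff.h>
#include <linux/virtio_vsock.h>	/* struct virtio_vsock_hdr */

/* Sketch: the header occupies the skb headroom reserved at allocation
 * time, so it can be recovered by casting skb->head.
 */
static inline struct virtio_vsock_hdr *virtio_vsock_hdr(struct sk_buff *skb)
{
	return (struct virtio_vsock_hdr *)skb->head;
}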
>
> This patch introduces the following performance changes:
>
> Tool/Config: uperf w/ 64 threads, SOCK_STREAM
> Test Runs: 5, mean of results
> Before: commit 95ec6bce2a0b ("Merge branch 'net-ipa-more-endpoints'")
>
> Test: 64KB, g2h
> Before: 21.63 Gb/s
> After: 25.59 Gb/s (+18%)
>
> Test: 16B, g2h
> Before: 11.86 Mb/s
> After: 17.41 Mb/s (+46%)
>
> Test: 64KB, h2g
> Before: 2.15 Gb/s
> After: 3.6 Gb/s (+67%)
>
> Test: 16B, h2g
> Before: 14.38 Mb/s
> After: 18.43 Mb/s (+28%)
>
> Signed-off-by: Bobby Eshleman <bobby.eshleman@xxxxxxxxxxxxx>
> ---
>
> Testing: passed vsock_test h2g and g2h
> Note2: net-next is closed, but sending out now in case more comments
> roll in, as discussed here:
> https://lore.kernel.org/all/Y34SoH1nFTXXLWbK@bullseye/
>
> Changes in v8:
> - vhost/vsock: remove unused enum
> - vhost/vsock: use spin_lock_bh() instead of spin_lock()
> - vsock/loopback: use __skb_dequeue instead of skb_dequeue
>
> Changes in v7:
> - use skb_queue_empty() instead of skb_queue_empty_lockless()
>
> Changes in v6:
> - use skb->cb instead of skb->_skb_refdst (rough sketch after this list)
> - use lock-free __skb_queue_purge for rx_queue when rx_lock held
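On the skb->cb item above: the idea, as I read it, is to keep per-packet vsock state in the control buffer that sk_buff already reserves for the owning layer, instead of overloading skb->_skb_refdst. Field and macro names below follow my reading of the patch, so treat them as illustrative:

#include <linux/skbuff.h>

/* Sketch: per-packet vsock state stored in skb->cb (the 48 bytes owned by
 * the layer currently handling the skb).
 */
struct virtio_vsock_skb_cb {
	bool reply;		/* packet was generated as a reply */
	bool tap_delivered;	/* already handed to the vsockmon tap */
};

#define VIRTIO_VSOCK_SKB_CB(skb) ((struct virtio_vsock_skb_cb *)((skb)->cb))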
>
> Changes in v5:
> - last_skb instead of skb: last_hdr->len = cpu_to_le32(last_skb->len)
>
> Changes in v4:
> - vdso/bits.h -> linux/bits.h
> - add virtio_vsock_alloc_skb() helper (rough sketch after this list)
> - virtio/vsock: rename buf_len -> total_len
> - update last_hdr->len
> - fix build_skb() for vsockmon (tested)
> - add queue helpers
> - use spin_{unlock/lock}_bh() instead of spin_lock()/spin_unlock()
> - note: I only ran a few g2h tests to check that this change
> had no perf impact. The above data is still from patch
> v3.
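On the virtio_vsock_alloc_skb() helper mentioned in the v4 list: as I understand it, it amounts to allocating one linear skb and reserving headroom for the packet header, along the lines of the sketch below. VIRTIO_VSOCK_SKB_HEADROOM and the exact checks are assumptions here; the definition in include/linux/virtio_vsock.h is the authority.

#include <linux/skbuff.h>

/* Sketch: one linear skb per packet, with headroom reserved up front for
 * the virtio_vsock header so header and payload share a single buffer.
 */
static inline struct sk_buff *virtio_vsock_alloc_skb(unsigned int size, gfp_t mask)
{
	struct sk_buff *skb;

	if (size < VIRTIO_VSOCK_SKB_HEADROOM)
		return NULL;

	skb = alloc_skb(size, mask);
	if (!skb)
		return NULL;

	skb_reserve(skb, VIRTIO_VSOCK_SKB_HEADROOM);
	return skb;
}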
>
> Changes in v3:
> - fix seqpacket bug
> - use zero in vhost_add_used(..., 0) since the device doesn't write to
> the buffer
> - use xmas tree style declarations
> - vsock_hdr() -> virtio_vsock_hdr() and other include file style fixes
> - no skb merging
> - save space by not using vsock_metadata
> - use _skb_refdst instead of skb buffer space for flags
> - use skb_pull() to keep track of read bytes instead of using an extra
> variable 'off' in the skb buffer space
> - remove unnecessary sk_allocation assignment
> - do not zero hdr needlessly
> - introduce virtio_transport_skb_len() because skb->len changes now
> - use spin_lock() directly on the queue lock instead of the sk_buff_head
> helpers, which use spin_lock_irqsave() (e.g., skb_dequeue); see the
> sketch after this list
> - do not reduce buffer size to be page size divisible
> - Note: the biggest performance change came from relaxing the spinlock
> variant, not from the buffer-size change.
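To make the spinlock note concrete: helpers like skb_queue_tail()/skb_dequeue() take the queue lock with spin_lock_irqsave(), which is wasted work when the queue is never touched from hard-IRQ context. The open-coded pattern is roughly the one below (the function name is just for illustration; later revisions settle on the _bh flavour):

#include <linux/skbuff.h>

/* Sketch: take the sk_buff_head lock explicitly and use the lockless "__"
 * helper, avoiding the IRQ save/restore that skb_queue_tail() does
 * internally.
 */
static void vsock_queue_tail_example(struct sk_buff_head *queue, struct sk_buff *skb)
{
	spin_lock_bh(&queue->lock);
	__skb_queue_tail(queue, skb);
	spin_unlock_bh(&queue->lock);
}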
>
> Changes in v2:
> - Use alloc_skb() directly instead of sock_alloc_send_pskb() to minimize
> uAPI changes.
> - Do not marshal errors to -ENOMEM for non-virtio implementations.
> - No longer a part of the original series
> - Some code cleanup and refactoring
> - Include performance stats
>
> drivers/vhost/vsock.c | 213 +++++-------
> include/linux/virtio_vsock.h | 126 +++++--
> net/vmw_vsock/virtio_transport.c | 149 +++------
> net/vmw_vsock/virtio_transport_common.c | 422 +++++++++++++-----------
> net/vmw_vsock/vsock_loopback.c | 51 +--
> 5 files changed, 495 insertions(+), 466 deletions(-)
>
> diff --git a/drivers/vhost/vsock.c b/drivers/vhost/vsock.c
> index 5703775af129..ee0a00432cb1 100644
> --- a/drivers/vhost/vsock.c
> +++ b/drivers/vhost/vsock.c
> @@ -51,8 +51,7 @@ struct vhost_vsock {
>  	struct hlist_node hash;
> 
>  	struct vhost_work send_pkt_work;
> -	spinlock_t send_pkt_list_lock;
> -	struct list_head send_pkt_list;	/* host->guest pending packets */
> +	struct sk_buff_head send_pkt_queue; /* host->guest pending packets */
> 
>  	atomic_t queued_replies;
> 
> @@ -108,40 +107,33 @@ vhost_transport_do_send_pkt(struct vhost_vsock *vsock,
>  	vhost_disable_notify(&vsock->dev, vq);
> 
>  	do {
> -		struct virtio_vsock_pkt *pkt;
> +		struct virtio_vsock_hdr *hdr;
> +		size_t iov_len, payload_len;
>  		struct iov_iter iov_iter;
> +		u32 flags_to_restore = 0;
> +		struct sk_buff *skb;
>  		unsigned out, in;
>  		size_t nbytes;
> -		size_t iov_len, payload_len;
>  		int head;
> -		u32 flags_to_restore = 0;
> 
> -		spin_lock_bh(&vsock->send_pkt_list_lock);
> -		if (list_empty(&vsock->send_pkt_list)) {
> -			spin_unlock_bh(&vsock->send_pkt_list_lock);
> +		spin_lock_bh(&vsock->send_pkt_queue.lock);
> +		skb = __skb_dequeue(&vsock->send_pkt_queue);
> +		spin_unlock_bh(&vsock->send_pkt_queue.lock);
> > Sorry for coming late, but I just commented in Paolo's reply.
> > Here I think we can directly use the new virtio_vsock_skb_dequeue().

> Yea, pretty late.
> And using that will prevent me from merging this since it's not in my tree yet.
> We can do a cleanup patch on top.

Yep, sure!
Functionally nothing changes, so it's fine with a patch on top.
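For anyone reading along without the rest of the series handy: the helper in question is, roughly, the following wrapper around the same lock-then-__skb_dequeue pattern the vhost code above open-codes (the authoritative definition is the one this series adds to include/linux/virtio_vsock.h):

#include <linux/skbuff.h>

/* Roughly what virtio_vsock_skb_dequeue() does: pop one skb while holding
 * the queue's own lock, using the _bh flavour since these queues are never
 * touched from hard-IRQ context.
 */
static inline struct sk_buff *virtio_vsock_skb_dequeue(struct sk_buff_head *list)
{
	struct sk_buff *skb;

	spin_lock_bh(&list->lock);
	skb = __skb_dequeue(list);
	spin_unlock_bh(&list->lock);

	return skb;
}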
So, for this patch:
Reviewed-by: Stefano Garzarella <sgarzare@xxxxxxxxxx>
Thanks,
Stefano