Re: [PATCH net-next v2 3/4] virtio_net: Map NAPIs to queues

On 16.01.25 17:09, Joe Damato wrote:
On Thu, Jan 16, 2025 at 03:53:14PM +0800, Xuan Zhuo wrote:
On Thu, 16 Jan 2025 05:52:58 +0000, Joe Damato <jdamato@xxxxxxxxxx> wrote:
Use netif_queue_set_napi to map NAPIs to queue IDs so that the mapping
can be accessed by user apps.

$ ethtool -i ens4 | grep driver
driver: virtio_net

$ sudo ethtool -L ens4 combined 4

$ ./tools/net/ynl/pyynl/cli.py \
        --spec Documentation/netlink/specs/netdev.yaml \
        --dump queue-get --json='{"ifindex": 2}'
[{'id': 0, 'ifindex': 2, 'napi-id': 8289, 'type': 'rx'},
  {'id': 1, 'ifindex': 2, 'napi-id': 8290, 'type': 'rx'},
  {'id': 2, 'ifindex': 2, 'napi-id': 8291, 'type': 'rx'},
  {'id': 3, 'ifindex': 2, 'napi-id': 8292, 'type': 'rx'},
  {'id': 0, 'ifindex': 2, 'type': 'tx'},
  {'id': 1, 'ifindex': 2, 'type': 'tx'},
  {'id': 2, 'ifindex': 2, 'type': 'tx'},
  {'id': 3, 'ifindex': 2, 'type': 'tx'}]

Note that virtio_net has TX-only NAPIs which do not have NAPI IDs, so
the lack of 'napi-id' in the above output is expected.

Signed-off-by: Joe Damato <jdamato@xxxxxxxxxx>
---
  v2:
    - Eliminate RTNL code paths using the API Jakub introduced in patch 1
      of this v2.
    - Added virtnet_napi_disable to reduce code duplication as
      suggested by Jason Wang.

  drivers/net/virtio_net.c | 34 +++++++++++++++++++++++++++++-----
  1 file changed, 29 insertions(+), 5 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index cff18c66b54a..c6fda756dd07 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -2803,9 +2803,18 @@ static void virtnet_napi_do_enable(struct virtqueue *vq,
  	local_bh_enable();
  }

-static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
+static void virtnet_napi_enable(struct virtqueue *vq,
+				struct napi_struct *napi)
  {
+	struct virtnet_info *vi = vq->vdev->priv;
+	int q = vq2rxq(vq);
+	u16 curr_qs;
+
  	virtnet_napi_do_enable(vq, napi);
+
+	curr_qs = vi->curr_queue_pairs - vi->xdp_queue_pairs;
+	if (!vi->xdp_enabled || q < curr_qs)
+		netif_queue_set_napi(vi->dev, q, NETDEV_QUEUE_TYPE_RX, napi);

So which case is the check of xdp_enabled for?

Based on a previous discussion [1], the NAPIs should not be linked
for in-kernel XDP, but they _should_ be linked for XSK.

I could certainly have misread the virtio_net code (please let me
know if I've gotten it wrong, I'm not an expert), but the three
cases I have in mind are:

   - vi->xdp_enabled = false, which happens when no XDP is being
     used, so the queue number will be < vi->curr_queue_pairs.

   - vi->xdp_enabled = false, which I believe is what happens in the
     XSK case. In this case, the NAPI is linked.

   - vi->xdp_enabled = true, which I believe only happens for
     in-kernel XDP - but not XSK - and in this case, the NAPI should
     NOT be linked.
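
For illustration, the check could be pulled out into a helper like the
one below; this is only my sketch (the helper name is made up, not part
of the patch), with the three cases above annotated against the
condition:

static bool virtnet_should_link_rx_napi(struct virtnet_info *vi, int q)
{
	u16 curr_qs = vi->curr_queue_pairs - vi->xdp_queue_pairs;

	/* Case 1: no XDP        -> xdp_enabled == false -> always link.
	 * Case 2: XSK/AF_XDP    -> xdp_enabled stays false -> always link.
	 * Case 3: in-kernel XDP -> xdp_enabled == true; only queues below
	 *         curr_qs (the stack queues) are linked, the XDP-dedicated
	 *         ones are not.
	 */
	return !vi->xdp_enabled || q < curr_qs;
}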

My interpretation based on [1] is that an in-kernel XDP Tx queue is a
queue that is only used if XDP is attached and is not visible to
userspace. The in-kernel XDP Tx queue exists so that XDP packets do not
load the stack Tx queues. IIRC fbnic has additional queues only for XDP
Tx. So for stack RX queues I would always link the napi, regardless of
whether XDP is attached or not. I think most drivers do not have
in-kernel XDP Tx queues. But I'm also not an expert.
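
As a rough sketch of that policy (purely illustrative, not taken from
virtio_net or fbnic; the function and parameter names are made up, and
it assumes the driver keeps any XDP-only Tx queues after the stack Tx
queues and that the caller holds the required lock for
netif_queue_set_napi):

static void example_link_napis(struct net_device *dev,
			       struct napi_struct *rx_napis,
			       struct napi_struct *tx_napis,
			       unsigned int num_rx,
			       unsigned int num_stack_tx)
{
	unsigned int q;

	/* Stack RX queues: always link, whether or not XDP is attached. */
	for (q = 0; q < num_rx; q++)
		netif_queue_set_napi(dev, q, NETDEV_QUEUE_TYPE_RX,
				     &rx_napis[q]);

	/* TX: only the stack-visible queues; XDP-only Tx queues (if the
	 * driver has any) sit above num_stack_tx and stay unlinked, since
	 * user space cannot see them anyway.
	 */
	for (q = 0; q < num_stack_tx; q++)
		netif_queue_set_napi(dev, q, NETDEV_QUEUE_TYPE_TX,
				     &tx_napis[q]);
}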

Gerhard



