On Mon, Jan 27, 2025 at 12:52:06PM -0500, Joe Damato wrote:
> On Sun, Jan 26, 2025 at 04:04:02PM +0800, Jason Wang wrote:
> > On Sat, Jan 25, 2025 at 4:19 AM Joe Damato <jdamato@xxxxxxxxxx> wrote:
> > >
> > > On Fri, Jan 24, 2025 at 09:14:54AM +0800, Jason Wang wrote:
> > > > On Thu, Jan 23, 2025 at 10:47 AM Joe Damato <jdamato@xxxxxxxxxx> wrote:
> > > > >
> > > > > On Thu, Jan 23, 2025 at 10:40:43AM +0800, Jason Wang wrote:
> > > > > > On Thu, Jan 23, 2025 at 1:41 AM Joe Damato <jdamato@xxxxxxxxxx> wrote:
> > > > > > >
> > > > > > > On Wed, Jan 22, 2025 at 02:12:46PM +0800, Jason Wang wrote:
> > > > > > > > On Wed, Jan 22, 2025 at 3:11 AM Joe Damato <jdamato@xxxxxxxxxx> wrote:
> > [...]
> > > > > > > > >
> > > > > > > > > -static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> > > > > > > > > +static void virtnet_napi_do_enable(struct virtqueue *vq,
> > > > > > > > > +                                   struct napi_struct *napi)
> > > > > > > > >  {
> > > > > > > > >         napi_enable(napi);
> > > > > > > >
> > > > > > > > Nit: it might be better to not have this helper to avoid a misuse of
> > > > > > > > this function directly.
> > > > > > >
> > > > > > > Sorry, I'm probably missing something here.
> > > > > > >
> > > > > > > Both virtnet_napi_enable and virtnet_napi_tx_enable need the logic
> > > > > > > in virtnet_napi_do_enable.
> > > > > > >
> > > > > > > Are you suggesting that I remove virtnet_napi_do_enable and repeat
> > > > > > > the block of code in there twice (in virtnet_napi_enable and
> > > > > > > virtnet_napi_tx_enable)?
> > > > > >
> > > > > > I think I miss something here, it looks like virtnet_napi_tx_enable()
> > > > > > calls virtnet_napi_do_enable() directly.
> > > > > >
> > > > > > I would like to know why we don't call netif_queue_set_napi() for TX NAPI here?
> > > > >
> > > > > Please see both the cover letter and the commit message of the next
> > > > > commit which addresses this question.
> > > > >
> > > > > TX-only NAPIs do not have NAPI IDs so there is nothing to map.
> > > >
> > > > Interesting, but I have more questions
> > > >
> > > > 1) why need a driver to know the NAPI implementation like this?
> > >
> > > I'm not sure I understand the question, but I'll try to give an
> > > answer and please let me know if you have another question.
> > >
> > > Mapping the NAPI IDs to queue IDs is useful for applications that
> > > use epoll based busy polling (which relies on the NAPI ID, see also
> > > SO_INCOMING_NAPI_ID and [1]), IRQ suspension [2], and generally
> > > per-NAPI configuration [3].
> > >
> > > Without this code added to the driver, the user application can get
> > > the NAPI ID of an incoming connection, but has no way to know which
> > > queue (or NIC) that NAPI ID is associated with or to set per-NAPI
> > > configuration settings.
> > >
> > > [1]: https://lore.kernel.org/all/20240213061652.6342-1-jdamato@xxxxxxxxxx/
> > > [2]: https://lore.kernel.org/netdev/20241109050245.191288-5-jdamato@xxxxxxxxxx/T/
> > > [3]: https://lore.kernel.org/lkml/20241011184527.16393-1-jdamato@xxxxxxxxxx/
> >
> > Yes, exactly. Sorry for being unclear, what I want to ask is actually:
> >
> > 1) TX NAPI doesn't have a NAPI ID, this seems more like a NAPI
> > implementation details which should be hidden from the driver.
> > 2) If 1 is true, in the netif_queue_set_napi(), should it be better to
> > add and check for whether or not NAPI has an ID and return early if it
> > doesn't have one
> > 3) Then driver doesn't need to know NAPI implementation details like
> > NAPI stuffs?
>
> Sorry it just feels like this conversation is getting off track.
>
> This change is about mapping virtio_net RX queues to NAPI IDs to
> allow for RX busy polling, per-NAPI config settings, etc.
>
> If you try to use netif_queue_set_napi with a TX-only NAPI, it will
> set the NAPI ID to 0.
>
> I already addressed this in the cover letter, would you mind
> carefully re-reading my cover letter and commit messages?
>
> If your main concern is that you want me to call
> netif_queue_set_napi for TX-only NAPIs in addition to the RX NAPIs
> in virtio_net, I can do that and resend an RFC.
>
> In that case, the output will show "0" for NAPI ID for the TX-only
> NAPIs. See the commit message of patch 3 and imagine that the output
> shows this instead:
>
> $ ./tools/net/ynl/pyynl/cli.py \
>          --spec Documentation/netlink/specs/netdev.yaml \
>          --dump queue-get --json='{"ifindex": 2}'
> [{'id': 0, 'ifindex': 2, 'napi-id': 8289, 'type': 'rx'},
>  {'id': 1, 'ifindex': 2, 'napi-id': 8290, 'type': 'rx'},
>  {'id': 2, 'ifindex': 2, 'napi-id': 8291, 'type': 'rx'},
>  {'id': 3, 'ifindex': 2, 'napi-id': 8292, 'type': 'rx'},
>  {'id': 0, 'ifindex': 2, 'napi-id': 0, 'type': 'tx'},
>  {'id': 1, 'ifindex': 2, 'napi-id': 0, 'type': 'tx'},
>  {'id': 2, 'ifindex': 2, 'napi-id': 0, 'type': 'tx'},
>  {'id': 3, 'ifindex': 2, 'napi-id': 0, 'type': 'tx'}]
>
> If in the future the TX-only NAPIs get NAPI IDs, then nothing would
> need to be updated in the driver and the NAPI IDs would "just work"
> and appear.

Actually, I missed a patch Jakub submitted to net [1], which prevents
dumping TX-only NAPIs.

So, I think this RFC as-is (only calling netif_queue_set_napi for RX
NAPIs) should be fine without changes. Please let me know.

[1]: https://lore.kernel.org/netdev/20250103183207.1216004-1-kuba@xxxxxxxxxx/
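
P.S. For anyone skimming the thread without the series applied, here is a
rough, hypothetical sketch of the RX-side shape being discussed above. The
function name and surrounding details are mine, not the actual patch; it is
written as if it lived in drivers/net/virtio_net.c. The idea is simply:
enable the RX NAPI as before, then publish the queue-to-NAPI mapping so it
shows up in the queue-get dump.

/* Hypothetical sketch only -- not the code from this series. Assumes the
 * caller holds rtnl, which netif_queue_set_napi() expects for a
 * registered device.
 */
static void virtnet_rx_napi_enable_sketch(struct virtnet_info *vi, int qidx)
{
        struct receive_queue *rq = &vi->rq[qidx];

        napi_enable(&rq->napi);

        /* RX NAPIs have a NAPI ID, so the mapping is meaningful here;
         * TX-only NAPIs currently do not, which is why the series only
         * links RX queues.
         */
        netif_queue_set_napi(vi->dev, qidx, NETDEV_QUEUE_TYPE_RX, &rq->napi);
}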
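
On the application side, here is a minimal userspace sketch (also not part
of the series) of how a program obtains the NAPI ID it would then match
against the queue-get dump above. SO_INCOMING_NAPI_ID is the existing
socket option referenced earlier; the toy TCP server around it and the port
number are made up, and error handling is omitted for brevity.

/* Minimal sketch: accept one TCP connection and print the NAPI ID of the
 * queue its packets arrived on. Requires a kernel built with
 * CONFIG_NET_RX_BUSY_POLL; a value of 0 means no NAPI ID was recorded.
 */
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

#ifndef SO_INCOMING_NAPI_ID
#define SO_INCOMING_NAPI_ID 56  /* value from include/uapi/asm-generic/socket.h */
#endif

int main(void)
{
        struct sockaddr_in addr = {
                .sin_family = AF_INET,
                .sin_port = htons(12345),  /* arbitrary port for the sketch */
                .sin_addr.s_addr = htonl(INADDR_ANY),
        };
        unsigned int napi_id = 0;
        socklen_t len = sizeof(napi_id);
        int lsn, conn;

        lsn = socket(AF_INET, SOCK_STREAM, 0);
        bind(lsn, (struct sockaddr *)&addr, sizeof(addr));
        listen(lsn, 1);
        conn = accept(lsn, NULL, NULL);

        getsockopt(conn, SOL_SOCKET, SO_INCOMING_NAPI_ID, &napi_id, &len);
        printf("napi-id: %u\n", napi_id);

        close(conn);
        close(lsn);
        return 0;
}

An application doing epoll based busy polling or IRQ suspension would use
that ID together with the per-queue mapping this series adds to figure out
which RX queue (and NIC) to tune.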