On Sat, 26 Mar 2022 15:06:40 -0600 Jens Axboe wrote:
> On 3/26/22 2:57 PM, Jens Axboe wrote:
> >> I'd also like to have a conversation about continuing to use
> >> the socket as a proxy for NAPI_ID, NAPI_ID is exposed to user
> >> space now. io_uring being a new interface I wonder if it's not
> >> better to let the user specify the request parameters directly.
> >
> > Definitely open to something that makes more sense, given we don't
> > have to shoehorn things through the regular API for NAPI with
> > io_uring.
>
> The most appropriate is probably to add a way to get/set NAPI settings
> on a per-io_uring basis, eg through io_uring_register(2). It's a bit
> more difficult if they have to be per-socket, as the polling happens off
> what would normally be the event wait path.
>
> What did you have in mind?

Not sure I fully comprehend what the current code does. IIUC it uses
the socket and then caches its napi_id, presumably because it doesn't
want to hold a reference on the socket? This may give the user a false
impression that the polling follows the socket. NAPIs may get
reshuffled underneath on pretty random reconfiguration / recovery
events (random == driver dependent).

I'm not entirely clear on how the thing is supposed to be used with a
TCP socket, as from a quick grep it appears that listening sockets
don't get their napi_id marked at all.

The commit mentions a UDP benchmark; Olivier, can you point me to more
info on the use case? I'm mostly familiar with NAPI busy poll with XDP
sockets, where it's pretty obvious.

My immediate reaction is that we should either explicitly call out
NAPI instances by id in the uAPI, or make sure we follow the socket in
every case. Also, we can probably figure out an easy way of avoiding
the hash table lookups and cache a pointer to the NAPI struct instead.

In any case, let's look at it in detail on Monday :)