Re: [GIT PULL] io_uring updates for 5.18-rc1

On Sat, 2022-03-26 at 14:30 -0700, Jakub Kicinski wrote:
> On Sat, 26 Mar 2022 15:06:40 -0600 Jens Axboe wrote:
> > On 3/26/22 2:57 PM, Jens Axboe wrote:
> > > > I'd also like to have a conversation about continuing to use
> > > > the socket as a proxy for NAPI_ID; NAPI_ID is exposed to user
> > > > space now. io_uring being a new interface, I wonder if it's not
> > > > better to let the user specify the request parameters directly.
> > > 
> > > Definitely open to something that makes more sense, given we don't
> > > have to shoehorn things through the regular API for NAPI with
> > > io_uring.
> > 
> > The most appropriate is probably to add a way to get/set NAPI
> > settings on a per-io_uring basis, eg through io_uring_register(2).
> > It's a bit more difficult if they have to be per-socket, as the
> > polling happens off what would normally be the event wait path.
> > 
> > What did you have in mind?
> 
> Not sure I fully comprehend what the current code does. IIUC it uses
> the socket and then caches its napi_id, presumably because it doesn't
> want to hold a reference on the socket?
> 
> This may give the user a false impression that the polling follows 
> the socket. NAPIs may get reshuffled underneath on pretty random
> reconfiguration / recovery events (random == driver dependent).
> 
> I'm not entirely clear how the thing is supposed to be used with a
> TCP socket, as from a quick grep it appears that listening sockets
> don't get napi_id marked at all.
> 
> The commit mentions a UDP benchmark; Olivier, can you point me to
> more info on the use case? I'm mostly familiar with NAPI busy poll
> with XDP sockets, where it's pretty obvious.
> 
> My immediate reaction is that we should either explicitly call out
> NAPI instances by id in uAPI, or make sure we follow the socket in
> every case. Also we can probably figure out an easy way of avoiding
> the hash table lookups and cache a pointer to the NAPI struct.
> 
> In any case, let's look in detail on Monday :)
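
To make the per-ring idea concrete, here is a rough sketch of how a
NAPI knob through io_uring_register(2) could look. The opcode and
struct below are names I made up for illustration; nothing like this
exists today:

/* Hypothetical uAPI sketch: IORING_REGISTER_NAPI and struct
 * io_uring_napi are invented names, not an existing interface.
 * __u32/__u8/__u64 come from <linux/types.h>.
 */
struct io_uring_napi {
        __u32   busy_poll_to;           /* busy poll timeout, in usecs */
        __u8    prefer_busy_poll;       /* defer hard IRQs while polling */
        __u8    pad[3];
        __u64   resv;
};

/* Userspace would opt the whole ring in once, instead of relying on
 * each socket's SO_BUSY_POLL/sk_ll_usec setting:
 */
struct io_uring_napi arg = {
        .busy_poll_to     = 100,        /* 100 usec budget */
        .prefer_busy_poll = 1,
};

io_uring_register(ring_fd, IORING_REGISTER_NAPI, &arg, 1);

Something along those lines would also sidestep the per-socket
question Jens raises, since the setting would live with the ring that
is doing the waiting.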

On second reading of this email, my understanding of it has become
clearer.

1. The epoll design is exposed to the same NAPI reshuffling, and it
seems to be resilient to that event.
2. Assuming the NAPI functions gracefully handle being passed an
expired NAPI id, the io_uring integration would rapidly drop expired
ids and start using the new ones (see the sketch after this list).
3. With my igb NIC, after 2 months of non-stop usage, I have never
seen napi ids change. The id values appear to depend solely on the
CONFIG_NR_CPUS value.
4. If random napi id reshuffling is a thing, it pretty much rules out
skipping the hash lookup by caching the napi_struct pointer directly.
With reshuffling in the picture, storing a pointer seems like a
dangerous proposal...
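
For reference, here is roughly why I believe point 2 holds. This is
napi_busy_loop() from net/core/dev.c, trimmed by me down to the part
that matters (illustration, not the full function):

void napi_busy_loop(unsigned int napi_id,
                    bool (*loop_end)(void *, unsigned long),
                    void *loop_end_arg, bool prefer_busy_poll, u16 budget)
{
        struct napi_struct *napi;

        rcu_read_lock();

        /* hashed lookup by id; a stale id simply isn't found */
        napi = napi_by_id(napi_id);
        if (!napi)
                goto out;       /* expired id: harmless no-op */

        /* ... busy poll the instance under the RCU read lock ... */
out:
        rcu_read_unlock();
}

So a cached id that has gone stale degrades to a failed lookup, which
is exactly the invalidation that caching a napi_struct pointer (point
4) would lose. And if I read napi_hash_add() correctly, ids are handed
out from a counter starting past NR_CPUS (the 0..NR_CPUS range being
reserved for sender_cpu), which would explain why the values I see
track CONFIG_NR_CPUS (point 3).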

Greetings,


