Re: [PATCH] io_uring/napi: remove duplicate io_napi_entry timeout assignation

On Mon, 2024-08-12 at 12:11 -0600, Jens Axboe wrote:
> On 8/12/24 12:10 PM, Jens Axboe wrote:
> > On 8/11/24 7:00 PM, Olivier Langlois wrote:
> > > On Sun, 2024-08-11 at 20:34 -0400, Olivier Langlois wrote:
> > > > io_napi_entry() has 2 calling sites. One of them is unlikely
> > > > to find an entry and, if it does, the timeout should arguably
> > > > not be updated.
> > > > 
> > > > The other io_napi_entry() calling site overwrites the update
> > > > made by io_napi_entry(), so the io_napi_entry() timeout update
> > > > has little or no value and is therefore removed.
> > > > 
> > > > Signed-off-by: Olivier Langlois <olivier@xxxxxxxxxxxxxx>
> > > > ---
> > > >  io_uring/napi.c | 1 -
> > > >  1 file changed, 1 deletion(-)
> > > > 
> > > > diff --git a/io_uring/napi.c b/io_uring/napi.c
> > > > index 73c4159e8405..1de1d4d62925 100644
> > > > --- a/io_uring/napi.c
> > > > +++ b/io_uring/napi.c
> > > > @@ -26,7 +26,6 @@ static struct io_napi_entry
> > > > *io_napi_hash_find(struct hlist_head *hash_list,
> > > >  	hlist_for_each_entry_rcu(e, hash_list, node) {
> > > >  		if (e->napi_id != napi_id)
> > > >  			continue;
> > > > -		e->timeout = jiffies + NAPI_TIMEOUT;
> > > >  		return e;
> > > >  	}
> > > >  
> > > I am commenting on my own patch because I found something
> > > curious that I was not sure about while reviewing the code.
> > > 
> > > Should the remaining e->timeout assignment be wrapped in the
> > > WRITE_ONCE() macro to ensure an atomic store?
> > 
> > I think that makes sense to do as lookup can be within rcu, and
> > hence we have nothing serializing it. Not for torn writes, but to
> > ensure that the memory sanitizer doesn't complain. I can just make
> > this change while applying, or send a v2.
> 
> As a separate patch I mean, not a v2. That part can wait until 6.12.
> 
ok. np. I'll look into it soon.

In the meantime, I have noticed a few suspicious things in the napi
code.

I am reporting them here to get a few extra pairs of eyes on them, to
confirm whether everything is fine or not.

1. in __io_napi_remove_stale(),

is it OK to use hash_for_each() instead of hash_for_each_safe()?

It might be OK because the deletion is done with hash_del_rcu() and
not a plain hash_del(), but I have little experience with the
shortcuts RCU permits, so I am unsure about this one...

2. in io_napi_free()

list_del(&e->list) is not called. Is the only reason that
io_napi_free() is called as part of the ring destruction, so skipping
the list cleanup is an optimization because the list is not expected
to be reused?

Would it be a good idea to call INIT_LIST_HEAD() before returning, as
an extra precaution to future-proof the function in case it is ever
reused in a context other than ring destruction?

3. I am surprised to notice that in __io_napi_do_busy_loop(),
list_for_each_entry_rcu() is used to traverse the list, but the list
is updated with the regular list_del() and list_add_tail() instead of
their RCU variants.

Is this OK?

If it is, the only plausible explanation I can think of is that it is
paired with another RCU hash table update... Maybe this is a common
RCU idiom that I am unaware of, and that is why the code does not even
carry a small comment about it, but I would think it deserves one.
This setup leaves me perplexed...
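For reference, the writer-side shape I would have expected pairs the
RCU list mutators with the existing serialization. This is only a
hypothetical sketch of the usual idiom, not the actual io_uring code;
the lock and field names here are my assumptions:

```c
/* Hypothetical writer-side sketch of the common RCU list idiom. */
spin_lock(&ctx->napi_lock);
list_del_rcu(&e->list);     /* readers inside list_for_each_entry_rcu()
                             * can still step over the unlinked entry */
hash_del_rcu(&e->node);
spin_unlock(&ctx->napi_lock);
kfree_rcu(e, rcu);          /* free only after a grace period */
```

If the non-RCU mutators really are safe here, it would presumably be
because no traversal can run concurrently with the update, which is
exactly the kind of invariant worth a comment.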
