On Wed, 9 Dec 2020 15:16:59 -0800 Jakub Kicinski <kuba@xxxxxxxxxx> wrote:

> On Tue, 8 Dec 2020 10:45:29 +0100 SeongJae Park wrote:
> > From: SeongJae Park <sjpark@xxxxxxxxx>
> >
> > In 'fqdir_exit()', a work for destruction of the 'fqdir' is enqueued.
> > The work function, 'fqdir_work_fn()', calls 'rcu_barrier()'.  In case
> > of intensive 'fqdir_exit()' (e.g., frequent 'unshare(CLONE_NEWNET)'
> > system calls), this increased contention could result in unacceptably
> > high latency of 'rcu_barrier()'.  This commit avoids such contention
> > by doing the destruction in a batched manner, similar to that of
> > 'cleanup_net()'.
> >
> > Signed-off-by: SeongJae Park <sjpark@xxxxxxxxx>
>
> Looks fine to me, but you haven't CCed Florian or Eric, who were the
> last two people to touch this function.  Please repost CCing them and
> fixing the nit below, thanks!

Thank you for letting me know that.  I will send the next version accordingly.

> >  static void fqdir_work_fn(struct work_struct *work)
> >  {
> > -	struct fqdir *fqdir = container_of(work, struct fqdir, destroy_work);
> > -	struct inet_frags *f = fqdir->f;
> > +	struct llist_node *kill_list;
> > +	struct fqdir *fqdir;
> > +	struct inet_frags *f;
>
> nit: reorder fqdir and f to keep reverse xmas tree variable ordering.

Hehe, ok, I will. :)


Thanks,
SeongJae Park
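
For reference, a minimal sketch of the batching idea described in the commit
message: a single shared kill list is drained by one work item, so one
'rcu_barrier()' covers every pending 'fqdir' instead of one barrier per
instance.  Names such as 'destroy_next', 'fqdir_free_list', and
'fqdir_destroy_work' are illustrative assumptions here, not necessarily the
identifiers used in the posted patch.

	#include <linux/llist.h>
	#include <linux/workqueue.h>
	#include <linux/rcupdate.h>
	#include <linux/refcount.h>
	#include <linux/slab.h>
	#include <net/inet_frag.h>

	/*
	 * Sketch only: assumes 'struct fqdir' carries a
	 * 'struct llist_node destroy_next' member in place of the
	 * per-instance destroy_work.
	 */
	static void fqdir_work_fn(struct work_struct *work);
	static LLIST_HEAD(fqdir_free_list);
	static DECLARE_WORK(fqdir_destroy_work, fqdir_work_fn);

	void fqdir_exit(struct fqdir *fqdir)
	{
		/*
		 * Queue this fqdir on the shared kill list.  llist_add()
		 * returns true only when the list was empty, so at most one
		 * work item is scheduled no matter how many fqdirs are
		 * pending.
		 */
		if (llist_add(&fqdir->destroy_next, &fqdir_free_list))
			queue_work(system_wq, &fqdir_destroy_work);
	}

	static void fqdir_work_fn(struct work_struct *work)
	{
		struct llist_node *kill_list;
		struct fqdir *fqdir, *tmp;
		struct inet_frags *f;

		/* Atomically take ownership of everything queued so far. */
		kill_list = llist_del_all(&fqdir_free_list);

		/*
		 * A single rcu_barrier() now covers the whole batch instead
		 * of one barrier per destroyed fqdir.
		 */
		rcu_barrier();

		llist_for_each_entry_safe(fqdir, tmp, kill_list, destroy_next) {
			f = fqdir->f;
			if (refcount_dec_and_test(&f->refcnt))
				complete(&f->completion);
			kfree(fqdir);
		}
	}

The property doing the work is that llist_add() reports whether the list was
previously empty, which keeps at most one destroy work in flight while the
contended 'rcu_barrier()' is amortized across the whole batch, as with
'cleanup_net()'.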