On Sun, Sep 20, 2020 at 08:06:38AM -0700, Paul E. McKenney wrote:
> On Fri, Sep 18, 2020 at 09:48:17PM +0200, Uladzislau Rezki (Sony) wrote:
> > Recently a separate worker thread was introduced to
> > maintain the local page cache from the regular kernel context,
> > instead of kvfree_rcu() contexts. That was done because a caller
> > of k[v]free_rcu() can be any context type, which is a problem
> > from the allocation point of view.
> >
> > On the other hand, a lock-less way of obtaining a page has
> > been introduced and directly injected into the k[v]free_rcu() path.
> >
> > Therefore it is no longer important to use a high-priority "wq"
> > for the external job that used to fill the page cache ASAP when it
> > was empty.
> >
> > Signed-off-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
>
> And I needed to apply the patch below to make this one pass rcutorture
> scenarios SRCU-P and TREE05. Repeat by:
>
> tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 3 --configs "SRCU-P TREE05" --trust-make
>
> Without the patch below, the system hangs very early in boot.
>
> Please let me know if some other fix would be better.
>
> 							Thanx, Paul
>
> ------------------------------------------------------------------------
>
> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> index 8ce1ea4..2424e2a 100644
> --- a/kernel/rcu/tree.c
> +++ b/kernel/rcu/tree.c
> @@ -3481,7 +3481,8 @@ void kvfree_call_rcu(struct rcu_head *head, rcu_callback_t func)
>  	success = kvfree_call_rcu_add_ptr_to_bulk(krcp, ptr);
>  	if (!success) {
>  		// Use delayed work, so we do not deadlock with rq->lock.
> -		if (!atomic_xchg(&krcp->work_in_progress, 1))
> +		if (rcu_scheduler_active == RCU_SCHEDULER_RUNNING &&
> +		    !atomic_xchg(&krcp->work_in_progress, 1))
>  			schedule_delayed_work(&krcp->page_cache_work, 1);
>
>  		if (head == NULL)

I will double check!

--
Vlad Rezki
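
P.S. For anyone following along, below is a minimal user-space sketch of
the gating logic in Paul's hunk (illustrative names only, this is not the
kernel code): the page-cache refill is skipped until the scheduler is
running, and the atomic exchange ensures only one refill is in flight.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

/* Stand-ins for rcu_scheduler_active and krcp->work_in_progress. */
static bool scheduler_running;
static atomic_int work_in_progress;

/* In the kernel this body runs later, from the delayed work item. */
static void fill_page_cache(void)
{
	printf("refilling page cache\n");
	atomic_store(&work_in_progress, 0);
}

static void maybe_schedule_refill(void)
{
	/*
	 * Skip the refill during early boot: queueing work before the
	 * scheduler is up is what hung the SRCU-P and TREE05 scenarios.
	 */
	if (scheduler_running && !atomic_exchange(&work_in_progress, 1))
		fill_page_cache();	/* kernel: schedule_delayed_work() */
}

int main(void)
{
	maybe_schedule_refill();	/* too early in boot: skipped */
	scheduler_running = true;
	maybe_schedule_refill();	/* now the refill runs */
	return 0;
}

In the kernel the guarded branch queues delayed work rather than running
the refill inline; the sketch only models the early-boot check plus the
single-refill guard.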