On Wed, Apr 22, 2020 at 09:18:41AM -0400, joel@xxxxxxxxxxxxxxxxx wrote:
> 
> 
> On April 22, 2020 6:35:36 AM EDT, Uladzislau Rezki <urezki@xxxxxxxxx> wrote:
> >On Mon, Apr 20, 2020 at 11:38:35AM -0400, Joel Fernandes (Google) wrote:
> >> To keep the kfree_rcu() path working in raw non-preemptible sections,
> >> prevent the optional entry into the allocator, as it uses sleeping locks.
> >> In fact, even if the caller of kfree_rcu() is preemptible, this path
> >> still is not, as krcp->lock is a raw spinlock as done in previous
> >> patches. With additional page pre-allocation in the works, hitting this
> >> return is going to be much less likely soon, so just prevent it for now
> >> so that PREEMPT_RT does not break. Note that page allocation here is an
> >> optimization and skipping it still makes kfree_rcu() work.
> >>
> >> Cc: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
> >> Co-developed-by: Uladzislau Rezki <urezki@xxxxxxxxx>
> >> Signed-off-by: Uladzislau Rezki <urezki@xxxxxxxxx>
> >> Signed-off-by: Joel Fernandes (Google) <joel@xxxxxxxxxxxxxxxxx>
> >> ---
> >>  kernel/rcu/tree.c | 12 ++++++++++++
> >>  1 file changed, 12 insertions(+)
> >>
> >> diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
> >> index cf68d3d9f5b81..cd61649e1b001 100644
> >> --- a/kernel/rcu/tree.c
> >> +++ b/kernel/rcu/tree.c
> >> @@ -3092,6 +3092,18 @@ kfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp,
> >>  	if (!bnode) {
> >>  		WARN_ON_ONCE(sizeof(struct kfree_rcu_bulk_data) > PAGE_SIZE);
> >>
> >> +		/*
> >> +		 * To keep this path working on raw non-preemptible
> >> +		 * sections, prevent the optional entry into the
> >> +		 * allocator as it uses sleeping locks. In fact, even
> >> +		 * if the caller of kfree_rcu() is preemptible, this
> >> +		 * path still is not, as krcp->lock is a raw spinlock.
> >> +		 * With additional page pre-allocation in the works,
> >> +		 * hitting this return is going to be much less likely.
> >> +		 */
> >> +		if (IS_ENABLED(CONFIG_PREEMPT_RT))
> >> +			return false;
> >> +
> >>  		bnode = (struct kfree_rcu_bulk_data *)
> >>  			__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
> >>  	}
> >
> >This will not work correctly by just reverting everything to the
> >"rcu_head path" for CONFIG_PREEMPT_RT kernels. We need to preallocate
> >at least once. I can add caching on top of this series to address it.
> 
> I discussed with Vlad offline; this patch is fine for mainline as we
> don't have headless support yet. So this patch is good. Future patches
> adding caching will also add the headless support after additional
> caching, so skipping the allocation here is OK.

Sounds good!  But would one of the realtime guys be willing to give
a Tested-by?

							Thanx, Paul

> Thanks.
> 
>  - Joel
> 
> 
> 
> >
> >--
> >Vlad Rezki
> 
> -- 
> Sent from my Android device with K-9 Mail. Please excuse my brevity.