Enter the page allocator with the newly introduced __GFP_NO_LOCKS flag
instead of the former GFP_NOWAIT | __GFP_NOWARN combination. This
addresses two concerns:

a) If built with CONFIG_PROVE_RAW_LOCK_NESTING, lockdep complains about
   a violation of the nesting rules ("BUG: Invalid wait context"). It
   performs raw_spinlock vs. spinlock nesting checks, i.e. it is not
   legal to acquire a spinlock_t while holding a raw_spinlock_t.
   Internally kfree_rcu() uses a raw_spinlock_t, whereas the page
   allocator takes a spinlock_t to access its zones. The same invalid
   nesting can also be introduced from a higher level:

   <snip>
       raw_spin_lock(&some_lock);
       kfree_rcu(some_pointer, some_field_offset);
   <snip>

b) If built with CONFIG_PREEMPT_RT, spinlock_t is converted into a
   sleepable variant, so invoking the page allocator from atomic
   contexts leads to "BUG: scheduling while atomic".

Signed-off-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
---
 kernel/rcu/tree.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 30e7e252b9e7..48cb64800108 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3327,7 +3327,7 @@ kvfree_call_rcu_add_ptr_to_bulk(struct kfree_rcu_cpu *krcp, void *ptr)
 		 * pages are available.
 		 */
 		bnode = (struct kvfree_rcu_bulk_data *)
-			__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
+			__get_free_page(__GFP_NO_LOCKS);
 	}
 
 	/* Switch to emergency path. */
-- 
2.20.1
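
For illustration, a minimal caller-side sketch of the pattern described
in (a) and (b). The lock, function, and variable names below are
hypothetical and not part of the patched code; the only assumption taken
from this series is that __GFP_NO_LOCKS does not acquire zone or sleeping
locks and simply fails if no page can be obtained locklessly.

<snip>
#include <linux/gfp.h>
#include <linux/spinlock.h>

/* Hypothetical names, for illustration only. */
static DEFINE_RAW_SPINLOCK(some_lock);

static void example_atomic_path(void)
{
	unsigned long addr;

	raw_spin_lock(&some_lock);

	/*
	 * Old pattern: GFP_NOWAIT | __GFP_NOWARN still takes a zone
	 * spinlock_t inside the buddy allocator, i.e. the invalid
	 * raw_spinlock_t -> spinlock_t nesting from (a), and a
	 * sleeping lock under CONFIG_PREEMPT_RT as in (b):
	 *
	 *	addr = __get_free_page(GFP_NOWAIT | __GFP_NOWARN);
	 *
	 * New pattern: __GFP_NO_LOCKS either returns a page without
	 * acquiring any locks or fails, so it is usable here.
	 */
	addr = __get_free_page(__GFP_NO_LOCKS);

	raw_spin_unlock(&some_lock);

	if (addr)
		free_page(addr);
}
<snip>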