Since the page is obtained in a fully preemptible context, dropping the
lock can lead to migration onto another CPU. As a result, the previous
bnode of that CPU may be underutilized, because the decision was made
for a CPU that had run out of free slots to store a pointer.

migrate_disable()/migrate_enable() are now independent of RT, so use
them to prevent any migration during a page request for the specific
CPU it is requested for.

Signed-off-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
---
 kernel/rcu/tree.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 454809514c91..cad36074366d 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3489,10 +3489,12 @@ add_ptr_to_bulk_krc_lock(struct kfree_rcu_cpu **krcp,
 		    (*krcp)->bkvhead[idx]->nr_records == KVFREE_BULK_MAX_ENTR) {
 		bnode = get_cached_bnode(*krcp);
 		if (!bnode && can_alloc) {
+			migrate_disable();
 			krc_this_cpu_unlock(*krcp, *flags);
 			bnode = (struct kvfree_rcu_bulk_data *)
 				__get_free_page(GFP_KERNEL | __GFP_RETRY_MAYFAIL |
 					__GFP_NOMEMALLOC | __GFP_NOWARN);
 			*krcp = krc_this_cpu_lock(flags);
+			migrate_enable();
 		}
 		if (!bnode)
-- 
2.20.1
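
A note for readers: the window the patch closes can be modeled in plain
user-space C. This is an illustrative sketch only; every name below
(percpu_cache, maybe_migrate, alloc_for_this_cpu, the local
migrate_disable()/migrate_enable() stubs) is hypothetical and merely
mimics the kernel pattern of per-CPU state, a sleeping allocation that
may migrate the task, and a pin that prevents that migration.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model: per-"CPU" caches, a preemption point that may
 * migrate the task, and a migrate_disable()-style pin. Not kernel code. */
#define NCPUS 2

struct percpu_cache { int pages; };
static struct percpu_cache cache[NCPUS];

static int  cur_cpu = 0;      /* CPU the task currently runs on */
static bool pinned  = false;  /* models migrate_disable() being in effect */

static void migrate_disable(void) { pinned = true;  }
static void migrate_enable(void)  { pinned = false; }

/* Models a sleeping call such as __get_free_page() in a preemptible
 * context: unless the task is pinned, it may wake up on another CPU. */
static void sleeping_alloc(void)
{
	if (!pinned)
		cur_cpu = (cur_cpu + 1) % NCPUS;
}

/* Decide "this CPU needs a page", drop the lock, allocate, re-lock
 * "this CPU". Returns the CPU whose cache actually received the page. */
static int refill_this_cpu(bool pin)
{
	if (pin)
		migrate_disable();
	/* ... krc_this_cpu_unlock() would happen here ... */
	sleeping_alloc();
	int landed_on = cur_cpu;  /* CPU we re-lock afterwards */
	/* ... krc_this_cpu_lock() would happen here ... */
	if (pin)
		migrate_enable();
	cache[landed_on].pages++;
	return landed_on;
}
```

Without the pin, the task that decided CPU 0 needed a page can wake on
CPU 1 and refill the wrong cache, leaving CPU 0's bnode underutilized;
with the pin, the decision and the refill target the same CPU.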