For a single argument and its slow path, switch to the expedited version of
synchronize_rcu(). This variant is faster, so under high memory pressure the
slow path becomes more efficient. Note that latency-sensitive workloads should
use rcupdate.rcu_normal=1, which makes synchronize_rcu_expedited() act like a
regular synchronize_rcu(), so no harm is done to them.

Signed-off-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
---
 kernel/rcu/tree.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 182772494cb0..87a64fcffa7f 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -3856,7 +3856,7 @@ void kvfree_call_rcu(struct rcu_head *head, void *ptr)
 	 */
 	if (!success) {
 		debug_rcu_head_unqueue((struct rcu_head *) ptr);
-		cond_synchronize_rcu_full(&old_snap);
+		cond_synchronize_rcu_expedited_full(&old_snap);
 		kvfree(ptr);
 	}
 }
-- 
2.39.2
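
As an illustrative aside, not part of the patch itself: below is a minimal
sketch of the single-argument slow-path pattern this change targets.
try_queue_for_deferred_free() is a hypothetical stand-in for the real
bulk-array queueing logic; get_state_synchronize_rcu_full(),
cond_synchronize_rcu_expedited_full() and kvfree() are the actual kernel APIs.

#include <linux/rcupdate.h>
#include <linux/slab.h>

/* Hypothetical queueing helper standing in for the real bulk-array logic. */
static bool try_queue_for_deferred_free(void *ptr);

static void kvfree_single_arg_sketch(void *ptr)
{
	struct rcu_gp_oldstate old_snap;

	/* Snapshot the grace-period state before attempting to queue. */
	get_state_synchronize_rcu_full(&old_snap);

	if (!try_queue_for_deferred_free(ptr)) {
		/*
		 * Queueing failed, e.g. no memory for the bulk array under
		 * pressure. Wait only if a full grace period has not already
		 * elapsed since the snapshot; with this patch the wait uses
		 * the expedited variant, shortening the inline stall. Then
		 * free the object directly.
		 */
		cond_synchronize_rcu_expedited_full(&old_snap);
		kvfree(ptr);
	}
}

Because the grace-period state is snapshotted up front, the cond_*() call
returns immediately whenever a full grace period has already elapsed, so the
expedited wait is only paid when it is actually needed.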