This is a note to let you know that I've just added the patch titled

    bpf: report RCU QS in cpumap kthread

to the 4.19-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     bpf-report-rcu-qs-in-cpumap-kthread.patch
and it can be found in the queue-4.19 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.



commit 640adab17cd2c6afde6f29eb470a4b3c8cad3553
Author: Yan Zhai <yan@xxxxxxxxxxxxxx>
Date:   Tue Mar 19 13:44:40 2024 -0700

    bpf: report RCU QS in cpumap kthread

    [ Upstream commit 00bf63122459e87193ee7f1bc6161c83a525569f ]

    When there is heavy load, cpumap kernel threads can be busy polling
    packets from redirect queues and block RCU tasks from reaching
    quiescent states. It is insufficient to just call cond_resched() in
    such a context. Periodically raising a consolidated RCU QS before
    cond_resched fixes the problem.

    Fixes: 6710e1126934 ("bpf: introduce new bpf cpu map type BPF_MAP_TYPE_CPUMAP")
    Reviewed-by: Jesper Dangaard Brouer <hawk@xxxxxxxxxx>
    Signed-off-by: Yan Zhai <yan@xxxxxxxxxxxxxx>
    Acked-by: Paul E. McKenney <paulmck@xxxxxxxxxx>
    Acked-by: Jesper Dangaard Brouer <hawk@xxxxxxxxxx>
    Link: https://lore.kernel.org/r/c17b9f1517e19d813da3ede5ed33ee18496bb5d8.1710877680.git.yan@xxxxxxxxxxxxxx
    Signed-off-by: Jakub Kicinski <kuba@xxxxxxxxxx>
    Signed-off-by: Sasha Levin <sashal@xxxxxxxxxx>

diff --git a/kernel/bpf/cpumap.c b/kernel/bpf/cpumap.c
index 61fbcae82f0a1..6f7548138435b 100644
--- a/kernel/bpf/cpumap.c
+++ b/kernel/bpf/cpumap.c
@@ -243,6 +243,7 @@ static void put_cpu_map_entry(struct bpf_cpu_map_entry *rcpu)
 static int cpu_map_kthread_run(void *data)
 {
 	struct bpf_cpu_map_entry *rcpu = data;
+	unsigned long last_qs = jiffies;

 	set_current_state(TASK_INTERRUPTIBLE);

@@ -262,10 +263,12 @@ static int cpu_map_kthread_run(void *data)
 		if (__ptr_ring_empty(rcpu->queue)) {
 			schedule();
 			sched = 1;
+			last_qs = jiffies;
 		} else {
 			__set_current_state(TASK_RUNNING);
 		}
 	} else {
+		rcu_softirq_qs_periodic(last_qs);
 		sched = cond_resched();
 	}
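
For context, rcu_softirq_qs_periodic() is a helper introduced alongside this
fix upstream (and pulled in as a dependency of this backport). A minimal
sketch of the pattern it implements is shown below; this is an approximation
for illustration, not the verbatim macro from include/linux/rcupdate.h, and
the exact threshold and guards in the kernel may differ:

    /*
     * Sketch of the periodic RCU quiescent-state pattern assumed above:
     * if enough time (roughly HZ/10 jiffies here) has passed since the
     * last reported QS, report a consolidated one and reset the stamp.
     */
    #define rcu_softirq_qs_periodic(old_ts)				\
    do {								\
    	if (time_after(jiffies, (old_ts) + HZ / 10)) {			\
    		preempt_disable();					\
    		rcu_softirq_qs();	/* report consolidated RCU QS */\
    		(old_ts) = jiffies;					\
    		preempt_enable();					\
    	}								\
    } while (0)

With this in place, cpu_map_kthread_run() reports a quiescent state at most a
few times per second while busy polling, rather than never reaching one under
sustained load.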