This is a note to let you know that I've just added the patch titled

    net/dst: use a smaller percpu_counter batch for dst entries accounting

to the 4.19-stable tree which can be found at:

    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     net-dst-use-a-smaller-percpu_counter-batch-for-dst-entries-accounting.patch
and it can be found in the queue-4.19 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


>From SRS0=s4sW=IX=amazon.com=prvs=73518ea15=surajjs@xxxxxxxxxx Sat Jan 13 01:55:35 2024
From: Suraj Jitindar Singh <surajjs@xxxxxxxxxx>
Date: Fri, 12 Jan 2024 16:53:06 -0800
Subject: net/dst: use a smaller percpu_counter batch for dst entries accounting
To: <stable@xxxxxxxxxxxxxxx>
Cc: <gregkh@xxxxxxxxxxxxxxxxxxx>, <trawets@xxxxxxxxxx>, <security@xxxxxxxxxx>, Eric Dumazet <edumazet@xxxxxxxxxx>, Jakub Kicinski <kuba@xxxxxxxxxx>, "Suraj Jitindar Singh" <surajjs@xxxxxxxxxx>
Message-ID: <20240113005308.2422331-2-surajjs@xxxxxxxxxx>

From: Eric Dumazet <edumazet@xxxxxxxxxx>

commit cf86a086a18095e33e0637cb78cda1fcf5280852 upstream.

percpu_counter_add() uses a default batch size which is quite big
on platforms with 256 cpus. (2*256 -> 512)

This means dst_entries_get_fast() can be off by +/- 2*(nr_cpus^2)
(131072 on servers with 256 cpus)

Reduce the batch size to something more reasonable, and add logic
to ip6_dst_gc() to call dst_entries_get_slow() before calling
the _very_ expensive fib6_run_gc() function.

Signed-off-by: Eric Dumazet <edumazet@xxxxxxxxxx>
Signed-off-by: Jakub Kicinski <kuba@xxxxxxxxxx>
Signed-off-by: Suraj Jitindar Singh <surajjs@xxxxxxxxxx>
Cc: <stable@xxxxxxxxxxxxxxx> # 4.19.x
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>
---
 include/net/dst_ops.h |    4 +++-
 net/core/dst.c        |    8 ++++----
 net/ipv6/route.c      |    3 +++
 3 files changed, 10 insertions(+), 5 deletions(-)

--- a/include/net/dst_ops.h
+++ b/include/net/dst_ops.h
@@ -53,9 +53,11 @@ static inline int dst_entries_get_slow(s
 	return percpu_counter_sum_positive(&dst->pcpuc_entries);
 }
 
+#define DST_PERCPU_COUNTER_BATCH 32
 static inline void dst_entries_add(struct dst_ops *dst, int val)
 {
-	percpu_counter_add(&dst->pcpuc_entries, val);
+	percpu_counter_add_batch(&dst->pcpuc_entries, val,
+				 DST_PERCPU_COUNTER_BATCH);
 }
 
 static inline int dst_entries_init(struct dst_ops *dst)
--- a/net/core/dst.c
+++ b/net/core/dst.c
@@ -97,11 +97,11 @@ void *dst_alloc(struct dst_ops *ops, str
 {
 	struct dst_entry *dst;
 
-	if (ops->gc && dst_entries_get_fast(ops) > ops->gc_thresh) {
+	if (ops->gc &&
+	    !(flags & DST_NOCOUNT) &&
+	    dst_entries_get_fast(ops) > ops->gc_thresh) {
 		if (ops->gc(ops)) {
-			printk_ratelimited(KERN_NOTICE "Route cache is full: "
-					   "consider increasing sysctl "
-					   "net.ipv[4|6].route.max_size.\n");
+			pr_notice_ratelimited("Route cache is full: consider increasing sysctl net.ipv6.route.max_size.\n");
 			return NULL;
 		}
 	}
--- a/net/ipv6/route.c
+++ b/net/ipv6/route.c
@@ -2778,6 +2778,9 @@ static int ip6_dst_gc(struct dst_ops *op
 	int entries;
 
 	entries = dst_entries_get_fast(ops);
+	if (entries > rt_max_size)
+		entries = dst_entries_get_slow(ops);
+
 	if (time_after(rt_last_gc + rt_min_interval, jiffies) &&
 	    entries <= rt_max_size)
 		goto out;


Patches currently in stable-queue which might be from surajjs@xxxxxxxxxx are

queue-4.19/ipv6-remove-max_size-check-inline-with-ipv4.patch
queue-4.19/net-dst-use-a-smaller-percpu_counter-batch-for-dst-entries-accounting.patch
queue-4.19/net-add-a-route-cache-full-diagnostic-message.patch
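

As background to the arithmetic in the changelog above: a batched per-CPU
counter only folds a CPU's local delta into the shared count once that delta
reaches the batch size, so the "fast" global value can trail the true total by
up to batch * nr_cpus. The sketch below is a minimal userspace simulation of
that scheme, not the kernel's percpu_counter implementation; NR_CPUS,
sim_counter and sim_add are illustrative names invented here. With the default
batch of 2 * nr_cpus (512 on a 256-CPU machine) the worst-case error bound is
512 * 256 = 131072, the figure quoted in the changelog; with
DST_PERCPU_COUNTER_BATCH (32) it drops to 32 * 256 = 8192.

/*
 * Minimal userspace simulation of a batched per-CPU counter.
 * Illustrative only; not the kernel's percpu_counter code.
 */
#include <stdio.h>

#define NR_CPUS 256			/* hypothetical large server */

struct sim_counter {
	long count;			/* shared "fast" approximation */
	long pcpu[NR_CPUS];		/* per-CPU local deltas */
};

/* Fold the local delta into the shared count once it reaches the batch. */
static void sim_add(struct sim_counter *c, int cpu, long val, long batch)
{
	c->pcpu[cpu] += val;
	if (c->pcpu[cpu] >= batch || c->pcpu[cpu] <= -batch) {
		c->count += c->pcpu[cpu];
		c->pcpu[cpu] = 0;
	}
}

int main(void)
{
	struct sim_counter def = { 0 }, small = { 0 };
	const long def_batch = 2 * NR_CPUS;	/* default-style batch */
	const long dst_batch = 32;		/* DST_PERCPU_COUNTER_BATCH */
	long true_total = 0;

	/* Worst case: every CPU keeps its local delta just under the batch. */
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		for (long i = 0; i < def_batch - 1; i++) {
			sim_add(&def, cpu, 1, def_batch);
			sim_add(&small, cpu, 1, dst_batch);
			true_total++;
		}
	}

	printf("true total             : %ld\n", true_total);
	printf("fast count (batch %4ld): %ld, drift %ld\n",
	       def_batch, def.count, true_total - def.count);
	printf("fast count (batch %4ld): %ld, drift %ld\n",
	       dst_batch, small.count, true_total - small.count);
	return 0;
}

Built with any C99 compiler, this prints a drift of 130816 for the large batch
versus 7936 for a batch of 32 under this access pattern, which is the gap the
patch narrows before ip6_dst_gc() falls back to dst_entries_get_slow() for an
exact sum.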