This is a note to let you know that I've just added the patch titled

    flowcache: Increase threshold for refusing new allocations

to the 4.8-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     flowcache-increase-threshold-for-refusing-new-allocations.patch
and it can be found in the queue-4.8 subdirectory.

If you, or anyone else, feels it should not be added to the stable
tree, please let <stable@xxxxxxxxxxxxxxx> know about it.


>From 6b226487815574193c1da864f2eac274781a2b0c Mon Sep 17 00:00:00 2001
From: Miroslav Urbanek <mu@xxxxxxxxxxxxxxxxxxx>
Date: Mon, 21 Nov 2016 15:48:21 +0100
Subject: flowcache: Increase threshold for refusing new allocations
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Miroslav Urbanek <mu@xxxxxxxxxxxxxxxxxxx>

commit 6b226487815574193c1da864f2eac274781a2b0c upstream.

The threshold for OOM protection is too small for systems with large
number of CPUs. Applications report ENOBUFs on connect() every 10
minutes.

The problem is that the variable net->xfrm.flow_cache_gc_count is a
global counter while the variable fc->high_watermark is a per-CPU
constant. Take the number of CPUs into account as well.

Fixes: 6ad3122a08e3 ("flowcache: Avoid OOM condition under preasure")
Reported-by: Lukáš Koldrt <lk@xxxxxxxxxx>
Tested-by: Jan Hejl <jh@xxxxxxxxxx>
Signed-off-by: Miroslav Urbanek <mu@xxxxxxxxxxxxxxxxxxx>
Signed-off-by: Steffen Klassert <steffen.klassert@xxxxxxxxxxx>
Signed-off-by: Greg Kroah-Hartman <gregkh@xxxxxxxxxxxxxxxxxxx>

---
 net/core/flow.c |    6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

--- a/net/core/flow.c
+++ b/net/core/flow.c
@@ -95,7 +95,6 @@ static void flow_cache_gc_task(struct wo
 	list_for_each_entry_safe(fce, n, &gc_list, u.gc_list) {
 		flow_entry_kill(fce, xfrm);
 		atomic_dec(&xfrm->flow_cache_gc_count);
-		WARN_ON(atomic_read(&xfrm->flow_cache_gc_count) < 0);
 	}
 }
 
@@ -236,9 +235,8 @@ flow_cache_lookup(struct net *net, const
 	if (fcp->hash_count > fc->high_watermark)
 		flow_cache_shrink(fc, fcp);
 
-	if (fcp->hash_count > 2 * fc->high_watermark ||
-	    atomic_read(&net->xfrm.flow_cache_gc_count) > fc->high_watermark) {
-		atomic_inc(&net->xfrm.flow_cache_genid);
+	if (atomic_read(&net->xfrm.flow_cache_gc_count) >
+	    2 * num_online_cpus() * fc->high_watermark) {
		flo = ERR_PTR(-ENOBUFS);
 		goto ret_object;
 	}


Patches currently in stable-queue which might be from mu@xxxxxxxxxxxxxxxxxxx are

queue-4.8/flowcache-increase-threshold-for-refusing-new-allocations.patch
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html
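
A side note on the logic of the fix: the refusal threshold now scales with the
number of online CPUs instead of comparing the global GC counter against a
single per-CPU watermark. The snippet below is a minimal standalone C sketch of
the old and new comparisons, not kernel code; the watermark, CPU count, and GC
count values are assumptions chosen only to illustrate the arithmetic.

    /*
     * Standalone sketch (not kernel code) of the old vs. new refusal
     * thresholds. In the kernel, the inputs come from
     * atomic_read(&net->xfrm.flow_cache_gc_count), num_online_cpus()
     * and fc->high_watermark; the values here are illustrative.
     */
    #include <stdio.h>
    #include <stdbool.h>

    static bool old_check(int gc_count, int high_watermark)
    {
            /* Old: global GC count compared against a per-CPU constant. */
            return gc_count > high_watermark;
    }

    static bool new_check(int gc_count, int ncpus, int high_watermark)
    {
            /* New: the limit is scaled by the number of online CPUs. */
            return gc_count > 2 * ncpus * high_watermark;
    }

    int main(void)
    {
            int high_watermark = 4096;  /* assumed per-CPU watermark */
            int ncpus = 64;             /* assumed large SMP machine */
            int gc_count = 5000;        /* assumed pending GC entries */

            printf("old check refuses allocation: %d\n",
                   old_check(gc_count, high_watermark));
            printf("new check refuses allocation: %d\n",
                   new_check(gc_count, ncpus, high_watermark));
            return 0;
    }

With these example numbers the old check already returns ENOBUFS even though
each CPU is well under its own watermark, while the new check only refuses
once the global count exceeds 2 * 64 * 4096 entries, which matches the intent
described in the commit message.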