The patch titled
     Subject: mm/vmalloc.c: preload a CPU with one object for split purpose
has been added to the -mm tree.  Its filename is
     mm-vmap-preload-a-cpu-with-one-object-for-split-purpose.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-vmap-preload-a-cpu-with-one-object-for-split-purpose.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-vmap-preload-a-cpu-with-one-object-for-split-purpose.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: "Uladzislau Rezki (Sony)" <urezki@xxxxxxxxx>
Subject: mm/vmalloc.c: preload a CPU with one object for split purpose

Refactor the NE_FIT_TYPE split case when it comes to an allocation of one
extra object.  We need it in order to build the remaining space.

Introduce ne_fit_preload()/ne_fit_preload_end() functions for preloading
one extra vmap_area object to ensure that we have it available when fit
type is NE_FIT_TYPE.

The preload is done per CPU in non-atomic context, thus with GFP_KERNEL
allocation masks.  More permissive parameters can be beneficial for
systems which suffer from high memory pressure or low memory condition.

Link: http://lkml.kernel.org/r/20190527151843.27416-3-urezki@xxxxxxxxx
Signed-off-by: Uladzislau Rezki (Sony) <urezki@xxxxxxxxx>
Reviewed-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
Cc: Hillf Danton <hdanton@xxxxxxxx>
Cc: Ingo Molnar <mingo@xxxxxxx>
Cc: Joel Fernandes <joelaf@xxxxxxxxxx>
Cc: Matthew Wilcox <willy@xxxxxxxxxxxxx>
Cc: Michal Hocko <mhocko@xxxxxxxx>
Cc: Oleksiy Avramchenko <oleksiy.avramchenko@xxxxxxxxxxxxxx>
Cc: Roman Gushchin <guro@xxxxxx>
Cc: Steven Rostedt <rostedt@xxxxxxxxxxx>
Cc: Tejun Heo <tj@xxxxxxxxxx>
Cc: Thomas Garnier <thgarnie@xxxxxxxxxx>
Cc: Thomas Gleixner <tglx@xxxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/vmalloc.c |   79 +++++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 76 insertions(+), 3 deletions(-)

--- a/mm/vmalloc.c~mm-vmap-preload-a-cpu-with-one-object-for-split-purpose
+++ a/mm/vmalloc.c
@@ -604,6 +604,13 @@ static LIST_HEAD(free_vmap_area_list);
  */
 static struct rb_root free_vmap_area_root = RB_ROOT;
 
+/*
+ * Preload a CPU with one object for "no edge" split case. The
+ * aim is to get rid of allocations from the atomic context, thus
+ * to use more permissive allocation masks.
+ */
+static DEFINE_PER_CPU(struct vmap_area *, ne_fit_preload_node);
+
 static __always_inline unsigned long
 va_size(struct vmap_area *va)
 {
@@ -1190,9 +1197,24 @@ adjust_va_to_fit_type(struct vmap_area *
 		 *   L V  NVA  V R
 		 * |---|-------|---|
 		 */
-		lva = kmem_cache_alloc(vmap_area_cachep, GFP_NOWAIT);
-		if (unlikely(!lva))
-			return -1;
+		lva = __this_cpu_xchg(ne_fit_preload_node, NULL);
+		if (unlikely(!lva)) {
+			/*
+			 * For percpu allocator we do not do any pre-allocation
+			 * and leave it as it is. The reason is it most likely
+			 * never ends up with NE_FIT_TYPE splitting. In case of
+			 * percpu allocations offsets and sizes are aligned to
+			 * fixed align request, i.e. RE_FIT_TYPE and FL_FIT_TYPE
+			 * are its main fitting cases.
+			 *
+			 * There are a few exceptions though, as an example it
+			 * is a first allocation (early boot up) when we have
+			 * "one" big free space that has to be split.
+			 */
+			lva = kmem_cache_alloc(vmap_area_cachep, GFP_NOWAIT);
+			if (!lva)
+				return -1;
+		}
 
 		/*
 		 * Build the remainder.
@@ -1263,6 +1285,48 @@ __alloc_vmap_area(unsigned long size, un
 }
 
 /*
+ * Preload this CPU with one extra vmap_area object to ensure
+ * that we have it available when fit type of free area is
+ * NE_FIT_TYPE.
+ *
+ * The preload is done in non-atomic context, thus it allows us
+ * to use more permissive allocation masks to be more stable under
+ * low memory condition and high memory pressure.
+ *
+ * On success it returns 1 with preemption disabled. In case
+ * of error 0 is returned with preemption enabled. Note it
+ * has to be paired with ne_fit_preload_end().
+ */
+static int
+ne_fit_preload(int nid)
+{
+	preempt_disable();
+
+	if (!__this_cpu_read(ne_fit_preload_node)) {
+		struct vmap_area *node;
+
+		preempt_enable();
+		node = kmem_cache_alloc_node(vmap_area_cachep, GFP_KERNEL, nid);
+		if (node == NULL)
+			return 0;
+
+		preempt_disable();
+
+		if (__this_cpu_cmpxchg(ne_fit_preload_node, NULL, node))
+			kmem_cache_free(vmap_area_cachep, node);
+	}
+
+	return 1;
+}
+
+static void
+ne_fit_preload_end(int preloaded)
+{
+	if (preloaded)
+		preempt_enable();
+}
+
+/*
  * Allocate a region of KVA of the specified size and alignment, within the
  * vstart and vend.
  */
@@ -1274,6 +1338,7 @@ static struct vmap_area *alloc_vmap_area
 	struct vmap_area *va;
 	unsigned long addr;
 	int purged = 0;
+	int preloaded;
 
 	BUG_ON(!size);
 	BUG_ON(offset_in_page(size));
@@ -1296,6 +1361,12 @@ static struct vmap_area *alloc_vmap_area
 	kmemleak_scan_area(&va->rb_node, SIZE_MAX, gfp_mask & GFP_RECLAIM_MASK);
 
 retry:
+	/*
+	 * Even if it fails we do not really care about that.
+	 * Just proceed as it is. "overflow" path will refill
+	 * the cache we allocate from.
+	 */
+	preloaded = ne_fit_preload(node);
 	spin_lock(&vmap_area_lock);
 
 	/*
@@ -1303,6 +1374,8 @@ retry:
 	 * returned. Therefore trigger the overflow path.
	 */
 	addr = __alloc_vmap_area(size, align, vstart, vend);
+	ne_fit_preload_end(preloaded);
+
 	if (unlikely(addr == vend))
 		goto overflow;
_

Patches currently in -mm which might be from urezki@xxxxxxxxx are

mm-vmap-remove-node-argument.patch
mm-vmap-preload-a-cpu-with-one-object-for-split-purpose.patch
mm-vmap-get-rid-of-one-single-unlink_va-when-merge.patch
mm-vmap-switch-to-warn_on-and-move-it-under-unlink_va.patch
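
For readers unfamiliar with the pattern, the idea of the patch can be
sketched in plain user-space C: allocate the spare object with a
permissive (possibly sleeping) allocator before taking the lock, stash
it per thread, and let the split path under the lock merely consume it,
falling back to an opportunistic allocation only if the stash is empty.
This is an illustrative sketch only, not part of the patch; the names
(struct node, preload(), take_preloaded()) are hypothetical, with a
pthread mutex standing in for vmap_area_lock and thread-local storage
standing in for the per-CPU variable.

/* Minimal user-space sketch of the preload pattern (hypothetical names). */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

struct node { int data; };

/* Per-thread stash, analogous to the per-CPU ne_fit_preload_node. */
static _Thread_local struct node *preload_node;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Runs outside the critical section, so the allocator may block. */
static int preload(void)
{
	if (!preload_node)
		preload_node = malloc(sizeof(*preload_node));
	return preload_node != NULL;
}

/* Runs under the lock: consume the stashed object, never allocate. */
static struct node *take_preloaded(void)
{
	struct node *n = preload_node;
	preload_node = NULL;
	return n;
}

int main(void)
{
	int preloaded = preload();	/* failure is tolerated, as in the patch */

	pthread_mutex_lock(&lock);
	struct node *n = take_preloaded();
	if (!n)
		n = malloc(sizeof(*n));	/* opportunistic fallback, like GFP_NOWAIT */
	pthread_mutex_unlock(&lock);

	if (n) {
		n->data = 42;
		printf("split object ready (preloaded=%d)\n", preloaded);
		free(n);
	}
	return 0;
}

The point mirrored from the patch is that the allocation is moved out
of the atomic (locked) context, so it can use permissive masks; inside
the critical section the object is only exchanged, never allocated,
except for the rare fallback path.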