The patch titled
     Subject: mm, CMA: clean-up CMA allocation error path
has been added to the -mm tree.  Its filename is
     mm-cma-clean-up-cma-allocation-error-path.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-cma-clean-up-cma-allocation-error-path.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-cma-clean-up-cma-allocation-error-path.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Subject: mm, CMA: clean-up CMA allocation error path

We can remove one call site for cma_clear_bitmap() if we first call it
before checking the error number.

Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@xxxxxxx>
Acked-by: Minchan Kim <minchan@xxxxxxxxxx>
Reviewed-by: Michal Nazarewicz <mina86@xxxxxxxxxx>
Reviewed-by: Zhang Yanfei <zhangyanfei@xxxxxxxxxxxxxx>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxxxxxxx>
Cc: Alexander Graf <agraf@xxxxxxx>
Cc: Aneesh Kumar K.V <aneesh.kumar@xxxxxxxxxxxxxxxxxx>
Cc: Gleb Natapov <gleb@xxxxxxxxxx>
Cc: Marek Szyprowski <m.szyprowski@xxxxxxxxxxx>
Cc: Paolo Bonzini <pbonzini@xxxxxxxxxx>
Cc: Benjamin Herrenschmidt <benh@xxxxxxxxxxxxxxxxxxx>
Cc: Paul Mackerras <paulus@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/cma.c |    7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff -puN mm/cma.c~mm-cma-clean-up-cma-allocation-error-path mm/cma.c
--- a/mm/cma.c~mm-cma-clean-up-cma-allocation-error-path
+++ a/mm/cma.c
@@ -285,11 +285,12 @@ struct page *cma_alloc(struct cma *cma,
 		if (ret == 0) {
 			page = pfn_to_page(pfn);
 			break;
-		} else if (ret != -EBUSY) {
-			cma_clear_bitmap(cma, pfn, count);
-			break;
 		}
+		cma_clear_bitmap(cma, pfn, count);
+		if (ret != -EBUSY)
+			break;
+
 		pr_debug("%s(): memory range at %p is busy, retrying\n",
			 __func__, pfn_to_page(pfn));
 		/* try again with a bit different memory target */
_
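For readers following the control-flow change rather than the diff context,
here is a small stand-alone C model of the new error path.  It is not kernel
code: try_range() and clear_bitmap() are made-up stand-ins for
alloc_contig_range() and cma_clear_bitmap().  It only illustrates why clearing
the bitmap before inspecting the error leaves a single clean-up call site that
covers both the hard-failure and the -EBUSY retry cases.

/*
 * Toy user-space model of the cleaned-up error path -- not mm/cma.c.
 * try_range() stands in for alloc_contig_range(), clear_bitmap() for
 * cma_clear_bitmap(); both names are invented for this illustration.
 */
#include <errno.h>
#include <stdio.h>

static int attempts;

static int try_range(void)
{
	/* Fail with -EBUSY twice, then succeed, to exercise the retry loop. */
	return (attempts++ < 2) ? -EBUSY : 0;
}

static void clear_bitmap(void)
{
	printf("bitmap cleared (single call site)\n");
}

int main(void)
{
	int ret;

	for (;;) {
		ret = try_range();
		if (ret == 0) {
			/* Success: the bitmap stays set for the allocation. */
			printf("allocation succeeded\n");
			break;
		}

		/* Clear first, then decide whether the error is retryable. */
		clear_bitmap();
		if (ret != -EBUSY)
			break;

		printf("range busy, retrying\n");
	}
	return 0;
}

Built with any C compiler, the model prints two "bitmap cleared"/"retrying"
pairs and then "allocation succeeded", mirroring the loop above.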
Patches currently in -mm which might be from iamjoonsoo.kim@xxxxxxx are

slab-maintainer-update.patch
slab-fix-oops-when-reading-proc-slab_allocators.patch
mm-slabc-add-__init-to-init_lock_keys.patch
slab-common-add-functions-for-kmem_cache_node-access.patch
slub-use-new-node-functions.patch
slub-use-new-node-functions-fix.patch
slab-use-get_node-and-kmem_cache_node-functions.patch
slab-use-get_node-and-kmem_cache_node-functions-fix.patch
mm-slabh-wrap-the-whole-file-with-guarding-macro.patch
vmalloc-use-rcu-list-iterator-to-reduce-vmap_area_lock-contention.patch
memcg-cleanup-memcg_cache_params-refcnt-usage.patch
memcg-destroy-kmem-caches-when-last-slab-is-freed.patch
memcg-mark-caches-that-belong-to-offline-memcgs-as-dead.patch
slub-dont-fail-kmem_cache_shrink-if-slab-placement-optimization-fails.patch
slub-make-slab_free-non-preemptable.patch
memcg-wait-for-kfrees-to-finish-before-destroying-cache.patch
slub-make-dead-memcg-caches-discard-free-slabs-immediately.patch
slab-do-not-keep-free-objects-slabs-on-dead-memcg-caches.patch
dma-cma-fix-possible-memory-leak.patch
dma-cma-separate-core-cma-management-codes-from-dma-apis.patch
dma-cma-support-alignment-constraint-on-cma-region.patch
dma-cma-support-arbitrary-bitmap-granularity.patch
dma-cma-support-arbitrary-bitmap-granularity-fix.patch
cma-generalize-cma-reserved-area-management-functionality.patch
ppc-kvm-cma-use-general-cma-reserved-area-management-framework.patch
mm-cma-clean-up-cma-allocation-error-path.patch
mm-cma-change-cma_declare_contiguous-to-obey-coding-convention.patch
mm-cma-clean-up-log-message.patch
mm-compactionc-isolate_freepages_block-small-tuneup.patch
page-owners-correct-page-order-when-to-free-page.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html