The patch titled
     Subject: mm, mempool: do not allow atomic resizing
has been added to the -mm tree.  Its filename is
     mm-mempool-do-not-allow-atomic-resizing.patch

This patch should soon appear at
    http://ozlabs.org/~akpm/mmots/broken-out/mm-mempool-do-not-allow-atomic-resizing.patch
and later at
    http://ozlabs.org/~akpm/mmotm/broken-out/mm-mempool-do-not-allow-atomic-resizing.patch

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/SubmitChecklist when testing your code ***

The -mm tree is included into linux-next and is updated
there every 3-4 working days

------------------------------------------------------
From: David Rientjes <rientjes@xxxxxxxxxx>
Subject: mm, mempool: do not allow atomic resizing

Allocating a large number of elements in atomic context could quickly
deplete memory reserves, so just disallow atomic resizing entirely.

Nothing currently uses mempool_resize() with anything other than
GFP_KERNEL, so convert existing callers to drop the gfp_mask.
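As a minimal illustration (an editor's sketch, not part of the patch): a
caller in process context would now resize a pool with the two-argument
form.  The frame_cache/frame_pool names and the helper functions below
are hypothetical; kmem_cache_create(), mempool_create_slab_pool() and
mempool_resize() are the real kernel APIs, with mempool_resize() shown
in its new gfp_mask-free form.

/*
 * Sketch only: hypothetical slab cache and mempool, used to show the
 * new mempool_resize() call pattern after this change.
 */
#include <linux/mempool.h>
#include <linux/slab.h>

static struct kmem_cache *frame_cache;	/* hypothetical slab cache */
static mempool_t *frame_pool;		/* hypothetical mempool */

static int frame_pool_setup(int min_nr)
{
	frame_cache = kmem_cache_create("frame", 256, 0, 0, NULL);
	if (!frame_cache)
		return -ENOMEM;
	frame_pool = mempool_create_slab_pool(min_nr, frame_cache);
	return frame_pool ? 0 : -ENOMEM;
}

static int frame_pool_set_depth(int new_min_nr)
{
	/*
	 * Process context only: mempool_resize() now allocates with
	 * GFP_KERNEL internally and may sleep, so there is no gfp_mask
	 * argument to pass any more.
	 */
	return mempool_resize(frame_pool, new_min_nr);
}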
Signed-off-by: David Rientjes <rientjes@xxxxxxxxxx>
Acked-by: Steffen Maier <maier@xxxxxxxxxxxxxxxxxx>	[zfcp]
Cc: Martin Schwidefsky <schwidefsky@xxxxxxxxxx>
Cc: Heiko Carstens <heiko.carstens@xxxxxxxxxx>
Cc: Steve French <sfrench@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 drivers/s390/scsi/zfcp_erp.c |    4 ++--
 fs/cifs/connect.c            |    6 ++----
 include/linux/mempool.h      |    2 +-
 mm/mempool.c                 |    9 +++++----
 4 files changed, 10 insertions(+), 11 deletions(-)

diff -puN drivers/s390/scsi/zfcp_erp.c~mm-mempool-do-not-allow-atomic-resizing drivers/s390/scsi/zfcp_erp.c
--- a/drivers/s390/scsi/zfcp_erp.c~mm-mempool-do-not-allow-atomic-resizing
+++ a/drivers/s390/scsi/zfcp_erp.c
@@ -738,11 +738,11 @@ static int zfcp_erp_adapter_strategy_ope
 		return ZFCP_ERP_FAILED;
 
 	if (mempool_resize(act->adapter->pool.sr_data,
-			   act->adapter->stat_read_buf_num, GFP_KERNEL))
+			   act->adapter->stat_read_buf_num))
 		return ZFCP_ERP_FAILED;
 
 	if (mempool_resize(act->adapter->pool.status_read_req,
-			   act->adapter->stat_read_buf_num, GFP_KERNEL))
+			   act->adapter->stat_read_buf_num))
 		return ZFCP_ERP_FAILED;
 
 	atomic_set(&act->adapter->stat_miss, act->adapter->stat_read_buf_num);
diff -puN fs/cifs/connect.c~mm-mempool-do-not-allow-atomic-resizing fs/cifs/connect.c
--- a/fs/cifs/connect.c~mm-mempool-do-not-allow-atomic-resizing
+++ a/fs/cifs/connect.c
@@ -773,8 +773,7 @@ static void clean_demultiplex_info(struc
 
 	length = atomic_dec_return(&tcpSesAllocCount);
 	if (length > 0)
-		mempool_resize(cifs_req_poolp, length + cifs_min_rcv,
-				GFP_KERNEL);
+		mempool_resize(cifs_req_poolp, length + cifs_min_rcv);
 }
 
 static int
@@ -848,8 +847,7 @@ cifs_demultiplex_thread(void *p)
 
 	length = atomic_inc_return(&tcpSesAllocCount);
 	if (length > 1)
-		mempool_resize(cifs_req_poolp, length + cifs_min_rcv,
-				GFP_KERNEL);
+		mempool_resize(cifs_req_poolp, length + cifs_min_rcv);
 
 	set_freezable();
 	while (server->tcpStatus != CifsExiting) {
diff -puN include/linux/mempool.h~mm-mempool-do-not-allow-atomic-resizing include/linux/mempool.h
--- a/include/linux/mempool.h~mm-mempool-do-not-allow-atomic-resizing
+++ a/include/linux/mempool.h
@@ -29,7 +29,7 @@ extern mempool_t *mempool_create_node(in
 			mempool_free_t *free_fn, void *pool_data,
 			gfp_t gfp_mask, int nid);
 
-extern int mempool_resize(mempool_t *pool, int new_min_nr, gfp_t gfp_mask);
+extern int mempool_resize(mempool_t *pool, int new_min_nr);
 extern void mempool_destroy(mempool_t *pool);
 extern void * mempool_alloc(mempool_t *pool, gfp_t gfp_mask);
 extern void mempool_free(void *element, mempool_t *pool);
diff -puN mm/mempool.c~mm-mempool-do-not-allow-atomic-resizing mm/mempool.c
--- a/mm/mempool.c~mm-mempool-do-not-allow-atomic-resizing
+++ a/mm/mempool.c
@@ -113,23 +113,24 @@ EXPORT_SYMBOL(mempool_create_node);
  *              mempool_create().
  * @new_min_nr: the new minimum number of elements guaranteed to be
  *              allocated for this pool.
- * @gfp_mask:   the usual allocation bitmask.
  *
  * This function shrinks/grows the pool. In the case of growing,
  * it cannot be guaranteed that the pool will be grown to the new
  * size immediately, but new mempool_free() calls will refill it.
+ * This function may sleep.
  *
  * Note, the caller must guarantee that no mempool_destroy is called
  * while this function is running. mempool_alloc() & mempool_free()
  * might be called (eg. from IRQ contexts) while this function executes.
  */
-int mempool_resize(mempool_t *pool, int new_min_nr, gfp_t gfp_mask)
+int mempool_resize(mempool_t *pool, int new_min_nr)
 {
 	void *element;
 	void **new_elements;
 	unsigned long flags;
 
 	BUG_ON(new_min_nr <= 0);
+	might_sleep();
 
 	spin_lock_irqsave(&pool->lock, flags);
 	if (new_min_nr <= pool->min_nr) {
@@ -145,7 +146,7 @@ int mempool_resize(mempool_t *pool, int
 	spin_unlock_irqrestore(&pool->lock, flags);
 
 	/* Grow the pool */
-	new_elements = kmalloc(new_min_nr * sizeof(*new_elements), gfp_mask);
+	new_elements = kmalloc(new_min_nr * sizeof(*new_elements), GFP_KERNEL);
 	if (!new_elements)
 		return -ENOMEM;
 
@@ -164,7 +165,7 @@ int mempool_resize(mempool_t *pool, int
 
 	while (pool->curr_nr < pool->min_nr) {
 		spin_unlock_irqrestore(&pool->lock, flags);
-		element = pool->alloc(gfp_mask, pool->pool_data);
+		element = pool->alloc(GFP_KERNEL, pool->pool_data);
 		if (!element)
 			goto out;
 		spin_lock_irqsave(&pool->lock, flags);
_

Patches currently in -mm which might be from rientjes@xxxxxxxxxx are

mm-hugetlb-close-race-when-setting-pagetail-for-gigantic-pages.patch
mm-fix-anon_vma-degree-underflow-in-anon_vma-endless-growing-prevention.patch
mm-fix-anon_vma-degree-underflow-in-anon_vma-endless-growing-prevention-v2.patch
mm-pagewalk-prevent-positive-return-value-of-walk_page_test-from-being-passed-to-callers.patch
cxgb4-drop-__gfp_nofail-allocation.patch
sh-dwarf-destroy-mempools-on-cleanup.patch
sh-dwarf-use-mempool_create_slab_pool.patch
jbd2-revert-must-not-fail-allocation-loops-back-to-gfp_nofail.patch
mm-slub-parse-slub_debug-o-option-in-switch-statement.patch
mm-rename-foll_mlock-to-foll_populate.patch
mm-rename-__mlock_vma_pages_range-to-populate_vma_page_range.patch
mm-move-gup-posix-mlock-error-conversion-out-of-__mm_populate.patch
mm-move-mm_populate-related-code-to-mm-gupc.patch
mm-hotplug-fix-concurrent-memory-hot-add-deadlock.patch
mm-cma-change-fallback-behaviour-for-cma-freepage.patch
mm-page_alloc-factor-out-fallback-freepage-checking.patch
mm-compaction-enhance-compaction-finish-condition.patch
mm-compaction-enhance-compaction-finish-condition-fix.patch
mm-incorporate-zero-pages-into-transparent-huge-pages.patch
mm-incorporate-zero-pages-into-transparent-huge-pages-fix.patch
mm-completely-remove-dumping-per-cpu-lists-from-show_mem.patch
mm-mempolicy-migrate_to_node-should-only-migrate-to-node.patch
mm-remove-gfp_thisnode.patch
mm-thp-really-limit-transparent-hugepage-allocation-to-local-node.patch
kernel-cpuset-remove-exception-for-__gfp_thisnode.patch
mm-clarify-__gfp_nofail-deprecation-status.patch
mm-clarify-__gfp_nofail-deprecation-status-checkpatch-fixes.patch
sparc-clarify-__gfp_nofail-allocation.patch
mm-mempool-do-not-allow-atomic-resizing.patch
mm-utilc-add-kstrimdup.patch

--
To unsubscribe from this list: send the line "unsubscribe mm-commits" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at  http://vger.kernel.org/majordomo-info.html