The patch titled
     Subject: zram: remove second stage of handle allocation
has been added to the -mm mm-unstable branch.  Its filename is
     zram-remove-second-stage-of-handle-allocation.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/zram-remove-second-stage-of-handle-allocation.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Sergey Senozhatsky <senozhatsky@xxxxxxxxxxxx>
Subject: zram: remove second stage of handle allocation
Date: Mon, 3 Mar 2025 11:03:14 +0900

Previously zram write() was atomic, which required us to pass
__GFP_KSWAPD_RECLAIM to the zsmalloc handle allocation on the fast path
and to attempt a slow-path allocation (with recompression) if the fast
path failed.

Since we are no longer in atomic context we can permit direct reclaim
during handle allocation and hence can have a single allocation path.
There is no slow path anymore, so we don't unlock the per-CPU stream (and
don't lose the compressed data), which means that recompression is no
longer needed (this should reduce CPU and battery usage).

Link: https://lkml.kernel.org/r/20250303022425.285971-6-senozhatsky@xxxxxxxxxxxx
Signed-off-by: Sergey Senozhatsky <senozhatsky@xxxxxxxxxxxx>
Cc: Hillf Danton <hdanton@xxxxxxxx>
Cc: Kairui Song <ryncsn@xxxxxxxxx>
Cc: Minchan Kim <minchan@xxxxxxxxxx>
Cc: Sebastian Andrzej Siewior <bigeasy@xxxxxxxxxxxxx>
Cc: Yosry Ahmed <yosry.ahmed@xxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 drivers/block/zram/zram_drv.c |   39 +++++---------------------------
 1 file changed, 7 insertions(+), 32 deletions(-)

--- a/drivers/block/zram/zram_drv.c~zram-remove-second-stage-of-handle-allocation
+++ a/drivers/block/zram/zram_drv.c
@@ -1723,11 +1723,11 @@ static int write_incompressible_page(str
 static int zram_write_page(struct zram *zram, struct page *page, u32 index)
 {
 	int ret = 0;
-	unsigned long handle = -ENOMEM;
-	unsigned int comp_len = 0;
+	unsigned long handle;
+	unsigned int comp_len;
 	void *dst, *mem;
 	struct zcomp_strm *zstrm;
-	unsigned long element = 0;
+	unsigned long element;
 	bool same_filled;
 
 	/* First, free memory allocated to this slot (if any) */
@@ -1741,7 +1741,6 @@ static int zram_write_page(struct zram *
 	if (same_filled)
 		return write_same_filled_page(zram, element, index);
 
-compress_again:
 	zstrm = zcomp_stream_get(zram->comps[ZRAM_PRIMARY_COMP]);
 	mem = kmap_local_page(page);
 	ret = zcomp_compress(zram->comps[ZRAM_PRIMARY_COMP], zstrm,
@@ -1751,7 +1750,6 @@ compress_again:
 	if (unlikely(ret)) {
 		zcomp_stream_put(zstrm);
 		pr_err("Compression failed! err=%d\n", ret);
-		zs_free(zram->mem_pool, handle);
 		return ret;
 	}
 
@@ -1760,35 +1758,12 @@ compress_again:
 		return write_incompressible_page(zram, page, index);
 	}
 
-	/*
-	 * handle allocation has 2 paths:
-	 * a) fast path is executed with preemption disabled (for
-	 *  per-cpu streams) and has __GFP_DIRECT_RECLAIM bit clear,
-	 *  since we can't sleep;
-	 * b) slow path enables preemption and attempts to allocate
-	 *  the page with __GFP_DIRECT_RECLAIM bit set. we have to
-	 *  put per-cpu compression stream and, thus, to re-do
-	 *  the compression once handle is allocated.
-	 *
-	 * if we have a 'non-null' handle here then we are coming
-	 * from the slow path and handle has already been allocated.
-	 */
-	if (IS_ERR_VALUE(handle))
-		handle = zs_malloc(zram->mem_pool, comp_len,
-				__GFP_KSWAPD_RECLAIM |
-				__GFP_NOWARN |
-				__GFP_HIGHMEM |
-				__GFP_MOVABLE);
+	handle = zs_malloc(zram->mem_pool, comp_len,
+			   GFP_NOIO | __GFP_NOWARN |
+			   __GFP_HIGHMEM | __GFP_MOVABLE);
 	if (IS_ERR_VALUE(handle)) {
 		zcomp_stream_put(zstrm);
-		atomic64_inc(&zram->stats.writestall);
-		handle = zs_malloc(zram->mem_pool, comp_len,
-				GFP_NOIO | __GFP_HIGHMEM |
-				__GFP_MOVABLE);
-		if (IS_ERR_VALUE(handle))
-			return PTR_ERR((void *)handle);
-
-		goto compress_again;
+		return PTR_ERR((void *)handle);
 	}
 
 	if (!zram_can_store_page(zram)) {
_

Patches currently in -mm which might be from senozhatsky@xxxxxxxxxxxx are

zram-sleepable-entry-locking.patch
zram-permit-preemption-with-active-compression-stream.patch
zram-remove-unused-crypto-include.patch
zram-remove-max_comp_streams-device-attr.patch
zram-remove-second-stage-of-handle-allocation.patch
zram-add-gfp_nowarn-to-incompressible-zsmalloc-handle-allocation.patch
zram-remove-writestall-zram_stats-member.patch
zram-limit-max-recompress-prio-to-num_active_comps.patch
zram-filter-out-recomp-targets-based-on-priority.patch
zram-rework-recompression-loop.patch
zram-move-post-processing-target-allocation.patch
zsmalloc-rename-pool-lock.patch
zsmalloc-sleepable-zspage-reader-lock.patch
zsmalloc-introduce-new-object-mapping-api.patch
zram-switch-to-new-zsmalloc-object-mapping-api.patch
zram-permit-reclaim-in-zstd-custom-allocator.patch
zram-do-not-leak-page-on-recompress_store-error-path.patch
zram-do-not-leak-page-on-writeback_store-error-path.patch
zram-add-might_sleep-to-zcomp-api.patch
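
For quick reference, the handle-allocation section of zram_write_page()
after this patch reduces to roughly the following.  This is a sketch
reconstructed from the hunks above, not a verbatim copy of the resulting
file: a single zs_malloc() attempt with GFP_NOIO (which permits direct
reclaim), and on failure the per-CPU compression stream is released and
the error is returned, with no second attempt and no recompression.

	/*
	 * Single allocation attempt; GFP_NOIO allows direct reclaim,
	 * so there is no fallback path and no recompression.
	 */
	handle = zs_malloc(zram->mem_pool, comp_len,
			   GFP_NOIO | __GFP_NOWARN |
			   __GFP_HIGHMEM | __GFP_MOVABLE);
	if (IS_ERR_VALUE(handle)) {
		/* Drop the per-CPU stream and propagate the error. */
		zcomp_stream_put(zstrm);
		return PTR_ERR((void *)handle);
	}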