The patch titled
     Subject: zsmalloc: turn chain size config option into UL constant
has been added to the -mm mm-unstable branch.  Its filename is
     zsmalloc-make-zspage-chain-size-configurable-fix.patch

This patch will shortly appear at
     https://git.kernel.org/pub/scm/linux/kernel/git/akpm/25-new.git/tree/patches/zsmalloc-make-zspage-chain-size-configurable-fix.patch

This patch will later appear in the mm-unstable branch at
    git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm

Before you just go and hit "reply", please:
   a) Consider who else should be cc'ed
   b) Prefer to cc a suitable mailing list as well
   c) Ideally: find the original patch on the mailing list and do a
      reply-to-all to that, adding suitable additional cc's

*** Remember to use Documentation/process/submit-checklist.rst when testing your code ***

The -mm tree is included into linux-next via the mm-everything
branch at git://git.kernel.org/pub/scm/linux/kernel/git/akpm/mm
and is updated there every 2-3 working days

------------------------------------------------------
From: Sergey Senozhatsky <senozhatsky@xxxxxxxxxxxx>
Subject: zsmalloc: turn chain size config option into UL constant
Date: Thu, 12 Jan 2023 16:14:43 +0900

This fixes

>> mm/zsmalloc.c:122:59: warning: right shift count >= width of type [-Wshift-count-overflow]

and

>> mm/zsmalloc.c:224:28: error: variably modified 'size_class' at file scope
     224 |         struct size_class *size_class[ZS_SIZE_CLASSES];

Link: https://lkml.kernel.org/r/20230112071443.1933880-1-senozhatsky@xxxxxxxxxxxx
Reported-by: kernel test robot <lkp@xxxxxxxxx>
Signed-off-by: Sergey Senozhatsky <senozhatsky@xxxxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 mm/zsmalloc.c |   11 +++++++----
 1 file changed, 7 insertions(+), 4 deletions(-)

--- a/mm/zsmalloc.c~zsmalloc-make-zspage-chain-size-configurable-fix
+++ a/mm/zsmalloc.c
@@ -133,9 +133,12 @@
 #define MAGIC_VAL_BITS	8
 
 #define MAX(a, b) ((a) >= (b) ? (a) : (b))
+
+#define ZS_MAX_PAGES_PER_ZSPAGE	(_AC(CONFIG_ZSMALLOC_CHAIN_SIZE, UL))
+
 /* ZS_MIN_ALLOC_SIZE must be multiple of ZS_ALIGN */
 #define ZS_MIN_ALLOC_SIZE \
-	MAX(32, (CONFIG_ZSMALLOC_CHAIN_SIZE << PAGE_SHIFT >> OBJ_INDEX_BITS))
+	MAX(32, (ZS_MAX_PAGES_PER_ZSPAGE << PAGE_SHIFT >> OBJ_INDEX_BITS))
 /* each chunk includes extra space to keep handle */
 #define ZS_MAX_ALLOC_SIZE	PAGE_SIZE
 
@@ -1119,7 +1122,7 @@ static struct zspage *alloc_zspage(struc
 				gfp_t gfp)
 {
 	int i;
-	struct page *pages[CONFIG_ZSMALLOC_CHAIN_SIZE];
+	struct page *pages[ZS_MAX_PAGES_PER_ZSPAGE];
 	struct zspage *zspage = cache_alloc_zspage(pool, gfp);
 
 	if (!zspage)
@@ -1986,7 +1989,7 @@ static void replace_sub_page(struct size
 			struct page *newpage, struct page *oldpage)
 {
 	struct page *page;
-	struct page *pages[CONFIG_ZSMALLOC_CHAIN_SIZE] = {NULL, };
+	struct page *pages[ZS_MAX_PAGES_PER_ZSPAGE] = {NULL, };
 	int idx = 0;
 
 	page = get_first_page(zspage);
@@ -2366,7 +2369,7 @@ static int calculate_zspage_chain_size(i
 	if (is_power_of_2(class_size))
 		return chain_size;
 
-	for (i = 1; i <= CONFIG_ZSMALLOC_CHAIN_SIZE; i++) {
+	for (i = 1; i <= ZS_MAX_PAGES_PER_ZSPAGE; i++) {
 		int waste;
 
 		waste = (i * PAGE_SIZE) % class_size;
_

Patches currently in -mm which might be from senozhatsky@xxxxxxxxxxxx are

zram-correctly-handle-all-next_arg-cases.patch
zsmalloc-rework-zspage-chain-size-selection.patch
zsmalloc-skip-chain-size-calculation-for-pow_of_2-classes.patch
zsmalloc-make-zspage-chain-size-configurable.patch
zsmalloc-make-zspage-chain-size-configurable-fix.patch
zsmalloc-set-default-zspage-chain-size-to-8.patch