The quilt patch titled
     Subject: mm-treewide-redefine-max_order-sanely-fix-3
has been removed from the -mm tree.  Its filename was
     mm-treewide-redefine-max_order-sanely-fix-3.patch

This patch was dropped because it was folded into mm-treewide-redefine-max_order-sanely.patch

------------------------------------------------------
From: "Kirill A. Shutemov" <kirill@xxxxxxxxxxxxx>
Subject: mm-treewide-redefine-max_order-sanely-fix-3
Date: Fri, 17 Mar 2023 02:21:44 +0300

fixups per Zi Yan

Link: https://lkml.kernel.org/r/20230316232144.b7ic4cif4kjiabws@xxxxxxxxxxxxxxxxx
Signed-off-by: "Kirill A. Shutemov" <kirill@xxxxxxxxxxxxx>
Reviewed-by: Zi Yan <ziy@xxxxxxxxxx>
Signed-off-by: Andrew Morton <akpm@xxxxxxxxxxxxxxxxxxxx>
---

 Documentation/admin-guide/kdump/vmcoreinfo.rst |    2 +-
 kernel/events/ring_buffer.c                    |    2 +-
 mm/Kconfig                                     |    4 ++--
 mm/slub.c                                      |    2 +-
 4 files changed, 5 insertions(+), 5 deletions(-)

--- a/Documentation/admin-guide/kdump/vmcoreinfo.rst~mm-treewide-redefine-max_order-sanely-fix-3
+++ a/Documentation/admin-guide/kdump/vmcoreinfo.rst
@@ -189,7 +189,7 @@ Offsets of the vmap_area's members. They
 information. Makedumpfile gets the start address of the vmalloc region
 from this.
 
-(zone.free_area, MAX_ORDER)
+(zone.free_area, MAX_ORDER + 1)
 ---------------------------
 
 Free areas descriptor. User-space tools use this value to iterate the
--- a/kernel/events/ring_buffer.c~mm-treewide-redefine-max_order-sanely-fix-3
+++ a/kernel/events/ring_buffer.c
@@ -814,7 +814,7 @@ struct perf_buffer *rb_alloc(int nr_page
 	size = sizeof(struct perf_buffer);
 	size += nr_pages * sizeof(void *);
 
-	if (order_base_2(size) >= PAGE_SHIFT+MAX_ORDER)
+	if (order_base_2(size) > PAGE_SHIFT+MAX_ORDER)
 		goto fail;
 
 	node = (cpu == -1) ? cpu : cpu_to_node(cpu);
--- a/mm/Kconfig~mm-treewide-redefine-max_order-sanely-fix-3
+++ a/mm/Kconfig
@@ -666,8 +666,8 @@ config HUGETLB_PAGE_SIZE_VARIABLE
	  HUGETLB_PAGE_ORDER when there are multiple HugeTLB page sizes available
	  on a platform.
 
-	  Note that the pageblock_order cannot exceed MAX_ORDER - 1 and will be
-	  clamped down to MAX_ORDER - 1.
+	  Note that the pageblock_order cannot exceed MAX_ORDER and will be
+	  clamped down to MAX_ORDER.
 
 config CONTIG_ALLOC
 	def_bool (MEMORY_ISOLATION && COMPACTION) || CMA
--- a/mm/slub.c~mm-treewide-redefine-max_order-sanely-fix-3
+++ a/mm/slub.c
@@ -4697,7 +4697,7 @@ __setup("slub_min_order=", setup_slub_mi
 static int __init setup_slub_max_order(char *str)
 {
 	get_option(&str, (int *)&slub_max_order);
-	slub_max_order = min(slub_max_order, (unsigned int)MAX_ORDER);
+	slub_max_order = min_t(unsigned int, slub_max_order, MAX_ORDER);
 	return 1;
 }
_

Patches currently in -mm which might be from kirill@xxxxxxxxxxxxx are

mm-treewide-redefine-max_order-sanely.patch